
In this article by Roberto Freato, author of Microsoft Azure Development Cookbook, we mix some of the recipes of this book to build a complete overview of what we need to set up a software infrastructure on the cloud.


Microsoft Azure is Microsoft’s platform for cloud computing. It provides developers with elastic building blocks to build scalable applications. Those building blocks are services for web hosting, storage, computation, connectivity, and more, which are usable as stand-alone services or mixed together to build advanced scenarios. Building an application with Microsoft Azure really means choosing the appropriate services and mixing them together to run our application.

We start by creating a SQL Database.

Creating a SQL Database server and database

SQL Database is a multitenanted database system in which many distinct databases are hosted on many physical servers managed by Microsoft. SQL Database administrators have no control over the physical provisioning of a database to a particular physical server. Indeed, to maintain high availability, a primary and two secondary copies of each SQL Database are stored on separate physical servers, and users can’t have any control over them.

Consequently, SQL Database does not provide a way for the administrator to specify the physical layout of a database and its logs when creating a SQL Database. The administrator merely has to provide a name, maximum size, and service tier for the database.

A SQL Database server is the administrative and security boundary for a collection of SQL Databases hosted in a single Azure region. All connections to a database hosted by the server go through the service endpoint provided by the SQL Database server. At the time of writing this book, an Azure subscription can create up to six SQL Database servers, each of which can host up to 150 databases (including the master database). These are soft limits that can be increased by arrangement with Microsoft Support.

From a billing perspective, only databases are counted; the server unit is just a container. However, to avoid wasting unused resources, an empty server is automatically deleted after 90 days of not hosting user databases.

The SQL Database server is provisioned on the Azure Portal. The Region as well as the administrator login and password must be specified during the provisioning process. After the SQL Database server has been provisioned, the firewall rules used to restrict access to the databases associated with the SQL Database server can be modified on the Azure Portal, using Transact SQL or the SQL Database Service Management REST API.

The result of the provisioning process is a SQL Database server identified by a fully qualified DNS name such as SERVER_NAME.database.windows.net, where SERVER_NAME is an automatically generated (random and unique) string that differentiates this SQL Database server from any other. The provisioning process also creates the master database for the SQL Database server and adds a user and associated login for the administrator specified during the provisioning process. This user has the rights to create other databases associated with this SQL Database server as well as any logins needed to access them.

Remember to distinguish between the SQL Database service and the familiar SQL Server engine, which is also available on the Azure platform as a plain installation on VMs. In the latter case, you retain complete control of the instance that runs SQL Server, its installation details, and the effort required to maintain it over time. Also, remember that SQL Server virtual machines are priced differently from standard VMs due to their license costs.

An administrator can create a SQL Database either on the Azure Portal or using the CREATE DATABASE Transact SQL statement.

At the time of writing this book, SQL Database runs in the following two different modes:

  • Version 1.0: This refers to Web or Business Editions
  • Version 2.0: This refers to Basic, Standard, or Premium service tiers with performance levels

The first version will be deprecated in a few months. Web Edition was designed for small databases under 5 GB and Business Edition for databases of 10 GB and larger (up to 150 GB). There is no difference between these editions other than the maximum size and billing increment.

The second version introduced service tiers (the equivalent of Editions) with an additional parameter (performance level) that sets the amount of dedicated resources assigned to a given database. The new service tiers (Basic, Standard, and Premium) introduced a lot of advanced features such as active/passive Geo-replication, point-in-time restore, cross-region copy, and restore. Different performance levels have different limits such as the Database Throughput Unit (DTU) and the maximum DB size. An updated list of service tiers and performance levels can be found at http://msdn.microsoft.com/en-us/library/dn741336.aspx.

Once a SQL Database has been created, the ALTER DATABASE Transact SQL statement can be used to alter either the edition or the maximum size of the database. The maximum size is important as the database is made read-only once it reaches that size (writes fail with error number 40544, The database has reached its size quota).

In this recipe, we’ll learn how to create a SQL Database server and a database using the Azure Portal and T-SQL.

Getting ready

To perform the majority of operations of the recipe, just a plain internet browser is needed. However, to connect directly to the server, we will use the SQL Server Management Studio (also available in the Express version).

How to do it…

First, we are going to create a SQL Database server using the Azure Portal. We will do this using the following steps:

  1. On the Azure Portal, go to the SQL DATABASES section and then select the SERVERS tab.
  2. In the bottom menu, select Add.
  3. In the CREATE SERVER window, provide an administrator login and password.
  4. Select a Subscription and Region that will host the server.

    To enable access to the server from other Azure services, you can check the Allow Windows Azure Services to access the server checkbox; this is a special firewall rule that allows the 0.0.0.0 to 0.0.0.0 IP range.

  5. Confirm and wait a few seconds to complete the operation.
  6. After that, using the Azure Portal, go to the SQL DATABASES section and then the SERVERS tab.
  7. Select the previously created server by clicking on its name.
  8. In the server page, go to the DATABASES tab.
  9. In the bottom menu, click on Add; then, after clicking on NEW SQL DATABASE, the CUSTOM CREATE window will open.
  10. Specify a name and select the Web Edition. Set the maximum database size to 5 GB and leave the COLLATION dropdown to its default.

    SQL Database fees are charged differently if you are using the Web/Business Edition rather than the Basic/Standard/Premium service tiers. The most updated pricing scheme for SQL Database can be found at http://azure.microsoft.com/en-us/pricing/details/sql-database/

  11. Verify that the server on which you are creating the database is specified correctly in the SERVER dropdown, and confirm.
  12. Alternatively, using Transact SQL, launch Microsoft SQL Server Management Studio and open the Connect to Server window.
  13. In the Server name field, specify the fully qualified name of the newly created SQL Database server in the following form: serverName.database.windows.net.
  14. Choose the SQL Server Authentication method.
  15. Specify the administrative username and password associated earlier.
  16. Click on the Options button and select the Encrypt connection checkbox.

    This setting is particularly critical while accessing a remote SQL Database. Without encryption, a malicious user could extract from the network traffic all the information needed to log in to the database. By specifying the Encrypt connection flag, we tell the client to connect only if a valid certificate is found on the server side.

  17. Optionally check the Remember password checkbox and connect to the server.

    To connect remotely to the server, a firewall rule should be created. In the Object Explorer window, locate the server you connected to, navigate to Databases | System Databases folder, and then right-click on the master database and select New Query.

  18. Copy and execute this query and wait for its completion:
    CREATE DATABASE DATABASE_NAME
    (
      MAXSIZE = 1 GB
    )

How it works…

The first part is pretty straightforward. In steps 1 and 2, we go to the SQL Database section of the Azure portal, locating the tab to manage the servers. In step 3, we fill the online popup with the administrative login details, and in step 4, we select a Region to place the SQL Database server. As a server (with its database) is located in a Region, it is not possible to automatically migrate it to another Region.

After the creation of the container resource (the server), we create the SQL Database by adding a new database to the newly created server, as stated from steps 6 to 9. In step 10, we can optionally change the default collation of the database and its maximum size.

In the last part, we use SQL Server Management Studio (SSMS) (step 12) to connect to the remote SQL Database instance. We notice that even without creating a database, there is a default database (the master one) we can connect to. After we set up the parameters in steps 13, 14, and 15, we enable the encryption requirement for the connection in step 16. Remember to always enable encryption before connecting to or listing the databases of a remote endpoint, as every operation performed without encryption sends credentials in plain text over the network. In step 17, we connect to the server if it grants access to our IP. Finally, in step 18, we open a contextual query window and execute the creation query, specifying a maximum size for the database.
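
As a hedged illustration of the same point in code, the following minimal sketch (server name, login, and password are placeholders) shows an ADO.NET connection string that requests encryption and rejects untrusted certificates; it mirrors what SSMS does when the Encrypt connection checkbox is selected:

using System.Data.SqlClient;

class EncryptedConnectionExample
{
    public static void OpenEncrypted()
    {
        // Encrypt=True refuses unencrypted channels; TrustServerCertificate=False
        // requires a valid certificate on the server side.
        const string connectionString =
            "Server=tcp:SERVER_NAME.database.windows.net,1433;" +
            "Database=master;" +
            "User ID=adminLogin@SERVER_NAME;Password={PASSWORD};" +
            "Encrypt=True;TrustServerCertificate=False;";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
        }
    }
}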

Note that the Database Edition should be specified in the CREATE DATABASE query as well. By default, the Web Edition is used. To override this, the following query can be used:

CREATE DATABASE MyDB ( Edition='Basic' )

There’s more…

We can also use the web-based Management Portal to perform various operations against the SQL Database, such as invoking Transact SQL commands, altering tables, viewing occupancy, and monitoring the performance. We will launch the Management Portal using the following steps:

  1. Obtain the name of the SQL Database server that contains the SQL Database.
  2. Go to https://serverName.database.windows.net.
  3. In the Database field, enter the database name (leave it empty to connect to the master database).
  4. Fill the Username and Password fields with the login information and confirm.

Increasing the size of a database

We can use the ALTER DATABASE command to increase the size (or the Edition, with the Edition parameter) of a SQL Database by connecting to the master database and invoking the following Transact SQL command:

ALTER DATABASE DATABASE_NAME
MODIFY
(
  MAXSIZE = 5 GB
)

We must use one of the allowable database sizes.

Connecting to a SQL Database with Entity Framework

Azure SQL Database is a SQL Server-like, fully managed relational database engine. In many other recipes, we showed you how to connect to a SQL Database transparently, just as we would to SQL Server, as SQL Database exposes the same TDS protocol as its on-premises brethren. However, using raw ADO.NET could lead to some of the following issues:

  • Hardcoded SQL: Even though a developer should always write good code and make no errors, there is a real possibility of making mistakes while writing stringified SQL, which is not verified at design time and might lead to runtime issues. These kinds of errors surface only at run time, as everything that stays inside quotation marks compiles. The solution is to reduce every line of code to a command that is compile-time safe.
  • Type safety: As ADO.NET components were designed to provide a common layer of abstraction to developers connecting against several different data sources, the interfaces provided are generic for retrieving values from the fields of a data row. A developer could cast a field to the wrong data type and realize it only at run time. The solution is to map table fields to the correct data types at compile time.
  • Long repetitive actions: We can always write our own wrapper to reduce code replication in the application, but using a high-level library, such as an ORM, takes care of most of the repetitive work of opening a connection, reading data, and so on.

Entity Framework hides the complexity of the data access layer and provides developers with an intermediate abstraction layer to let them operate on a collection of objects instead of rows of tables. The power of the ORM itself is enhanced by the usage of LINQ, a library of extension methods that, in synergy with the language capabilities (anonymous types, expression trees, lambda expressions, and so on), makes the DB access easier and less error prone than in the past.
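
As a rough, hypothetical comparison (the connection string is an assumption and NorthwindEntities is the context generated later in this recipe), the same read can be written with raw ADO.NET and with Entity Framework plus LINQ; in the first case, the SQL text and the cast are verified only at run time, while in the second, names and types are checked by the compiler:

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;

class AdoVersusEfExample
{
    // Raw ADO.NET: the SQL string and the cast are not validated at compile time.
    public static List<string> CompanyNamesWithAdoNet(string connectionString)
    {
        var names = new List<string>();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT CompanyName FROM Customers", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    names.Add((string)reader["CompanyName"]);
                }
            }
        }
        return names;
    }

    // Entity Framework + LINQ: a typo in CompanyName would not compile.
    public static List<string> CompanyNamesWithEf()
    {
        using (var ctx = new NorthwindEntities())
        {
            return ctx.Customers.Select(c => c.CompanyName).ToList();
        }
    }
}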

This recipe is an introduction to Entity Framework, the ORM of Microsoft, in conjunction with the Azure SQL Database.

Getting ready

The database used in this recipe is the Northwind sample database of Microsoft. It can be downloaded from CodePlex at http://northwinddatabase.codeplex.com/.

How to do it…

We are going to connect to the SQL Database using Entity Framework and perform various operations on data. We will do this using the following steps:

  1. Add a new class named EFConnectionExample to the project.
  2. Add a new ADO.NET Entity Data Model named Northwind.edmx to the project; the Entity Data Model Wizard window will open.
  3. Choose Generate from database in the Choose Model Contents step.
  4. In the Choose Your Data Connection step, select the Northwind connection from the dropdown or create a new connection if it is not shown.
  5. Save the connection settings in the App.config file for later use and name the setting NorthwindEntities.

    If VS asks for the version of EF to use, select the most recent one.

  6. In the last step, choose the object to include in the model. Select the Tables, Views, Stored Procedures, and Functions checkboxes.
  7. Add the following method, retrieving every CompanyName, to the class:
    private IEnumerable<string> NamesOfCustomerCompanies()
    {
        using (var ctx = new NorthwindEntities())
        {
            return ctx.Customers
                .Select(p => p.CompanyName).ToArray();
        }
    }
  8. Add the following method, updating every customer located in Italy, to the class:
    private void UpdateItalians()
    {
        using (var ctx = new NorthwindEntities())
        {
            ctx.Customers.Where(p => p.Country == "Italy")
                .ToList().ForEach(p => p.City = "Milan");
            ctx.SaveChanges();
        }
    }
  9. Add the following method, inserting a new order for the first Italian company alphabetically, to the class:
    private int FirstItalianPlaceOrder()
    {
        using (var ctx = new NorthwindEntities())
        {
            var order = new Orders()
                {
                    EmployeeID = 1,
                    OrderDate = DateTime.UtcNow,
                    ShipAddress = "My Address",
                    ShipCity = "Milan",
                    ShipCountry = "Italy",
                    ShipName = "Good Ship",
                    ShipPostalCode = "20100"
                };
            ctx.Customers.Where(p => p.Country == "Italy")
                .OrderBy(p=>p.CompanyName)
                .First().Orders.Add(order);
            ctx.SaveChanges();
            return order.OrderID;
        }
    }
  10. Add the following method, removing the previously inserted order, to the class:
    private void RemoveTheFunnyOrder(int orderId)
    {
        using (var ctx = new NorthwindEntities())
        {
            var order = ctx.Orders
                .FirstOrDefault(p => p.OrderID == orderId);
            if (order != null) ctx.Orders.Remove(order);
            ctx.SaveChanges();
        }
    }
  11. Add the following method, using the methods added earlier, to the class:
    public static void UseEFConnectionExample()
    {
        var example = new EFConnectionExample();
        var customers=example.NamesOfCustomerCompanies();
        foreach (var customer in customers)
        {
            Console.WriteLine(customer);
        }
        example.UpdateItalians();
        var order=example.FirstItalianPlaceOrder();
        example.RemoveTheFunnyOrder(order);
    }

How it works…

This recipe uses EF to connect and operate on a SQL Database. In step 1, we create a class that contains the recipe, and in step 2, we open the wizard for the creation of Entity Data Model (EDMX). We create the model, starting from an existing database in step 3 (it is also possible to write our own model and then persist it in an empty database), and then, we select the connection in step 4. In fact, there is no reference in the entire code to the Windows Azure SQL Database. The only reference should be in the App.config settings created in step 5; this can be changed to point to a SQL Server instance, leaving the code untouched. The last step of the EDMX creation consists of concrete mapping between the relational table and the object model, as shown in step 6.

This method generates the code classes that map the table schema, using strong types and collections referred to as Navigation properties. It is also possible to start from the code, writing the classes that could represent the database schema. This method is known as Code-First.
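
As a minimal, hypothetical sketch of the Code-First approach (class, property, and connection string names are assumptions, not part of the recipe), the schema is described by plain classes and exposed through a DbContext:

using System.Data.Entity;

public class Customer
{
    // By convention, CustomerID is detected as the primary key.
    public string CustomerID { get; set; }
    public string CompanyName { get; set; }
}

public class NorthwindCodeFirstContext : DbContext
{
    // "NorthwindCodeFirst" is an assumed connection string name in App.config.
    public NorthwindCodeFirstContext() : base("name=NorthwindCodeFirst") { }

    public DbSet<Customer> Customers { get; set; }
}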

In step 7, we ask for every CompanyName of the Customers table. Every table in EF is represented by DbSet<Type>, where Type is the class of the entity. In steps 7 and 8, Customers is DbSet<Customers>, and we use a lambda expression to project (select) a property field and another one to create a filter (where) based on a property value. The SaveChanges method in step 8 persists to the database the changes detected in the disconnected object data model. This magic is one of the purposes of an ORM tool.

In step 9, we use the navigation property (relationship) between a Customers object and the Orders collection (table) to add a new order with sample data. We use the OrderBy extension method to order the results by the specified property, and finally, we save the newly created item. Even now, EF automatically keeps track of the newly added item. Additionally, after the SaveChanges method, EF populates the identity field of Order (OrderID) with the actual value created by the database engine.

In step 10, we use the previously obtained OrderID to remove the corresponding order from the database. We use the FirstOrDefault() method to test the existence of the ID, and then, we remove the resulting object like we removed an object from a plain old collection.

In step 11, we use the methods created to run the demo and show the results.

Deploying a Website

Creating a Website is an administrative task, which is performed in the Azure Portal in the same way we provision every other building block. The Website created is like a “deployment slot”, or better, “web space”, since the abstraction given to the user is exactly that.

Azure Websites does not require additional knowledge compared to an old-school hosting provider, where FTP was the standard for the deployment process. Actually, FTP is just one of the supported deployment methods in Websites, and Web Deploy is probably the better choice for several scenarios.

Web Deploy is a Microsoft technology used for copying files and provisioning additional content and configuration to integrate the deployment process. Web Deploy runs on HTTP and HTTPS with basic (username and password) authentication. This makes it a good choice in networks where FTP is forbidden or the firewall rules are strict.

Some time ago, Microsoft introduced the concept of Publish Profile, an XML file containing all the available deployment endpoints of a particular website that, if given to Visual Studio or Web Matrix, could make the deployment easier. Every Azure Website comes with a publish profile with unique credentials, so one can distribute it to developers without giving them grants on the Azure Subscription.

Web Matrix is a Microsoft client tool that is useful for editing live sites directly from an intuitive GUI. It uses Web Deploy to provide access to the remote filesystem and to perform remote changes.

In Websites, we can host several websites on the same server farm, making administration easier while isolating each environment from its neighbors. Moreover, virtual directories can be defined from the Azure Portal, enabling complex scenarios or making migrations easier.

In this recipe, we will cope with the deployment process, using FTP and Web Deploy with some variants.

Getting ready

This recipe assumes we have an FTP client installed on the local machine (for example, FileZilla) and, of course, a valid Azure Subscription. We also need Visual Studio 2013 with the latest Azure SDK installed (at the time of writing, SDK Version 2.3).

How to do it…

We are going to create a new Website, create a new ASP.NET project, deploy it through FTP and Web Deploy, and also use virtual directories. We do this as follows:

  1. Create a new Website in the Azure Portal, specifying the following details:
    • The URL prefix (that is, TestWebSite) is set to [prefix].azurewebsites.net
    • The Web Hosting Plan (create a new one)
    • The Region/Location (select West Europe)
  2. Click on the newly created Website and go to the Dashboard tab.
  3. Click on Download the publish profile and save it on the local computer.
  4. Open Visual Studio and create a new ASP.NET web application named TestWebSite, with an empty template and web forms’ references.
  5. Add a sample Default.aspx page to the project and paste into it the following HTML:
    <h1>Root Application</h1>
  6. Press F5 and test whether the web application is displayed correctly.

    Create a local publish target.

  7. Right-click on the project and select Publish.
  8. Select Custom and specify Local Folder.
  9. In the Publish method, select File System and provide a local folder where Visual Studio will save files. Then click on Publish to complete.

    Publish via FTP.

  10. Open FileZilla and then open the Publish profile (saved in step 3) with a text editor.
  11. Locate the FTP endpoint and specify the following:
    • publishUrl as the Host field
    • username as the Username field
    • userPWD as the Password field
  12. Delete the hostingstart.html file that is already present on the remote space.

    When we create a new Azure Website, there is a single HTML file in the root folder by default, which is served to clients as the default page. If left in place, this file could still be served after the user's deployment if no valid default document is found.

  13. Drag-and-drop all the contents of the local folder with the binaries to the remote folder, then run the website.

    Publish via Web Deploy.

  14. Right-click on the Project and select Publish.
  15. Go to the Publish Web wizard start and select Import, providing the previously downloaded Publish Profile file.
  16. When Visual Studio reads the Web Deploy settings, it populates the next window. Click on Confirm and Publish the web application.

    Create an additional virtual directory.

  17. Go to the Configure tab of the Website on the Azure Portal.
  18. At the bottom, in the virtual applications and directories, add the following:
    • /app01 with the path siteapp01
    • Mark it as Application
  19. Open the Publish Profile file and duplicate the <publishProfile> tag with the method FTP, then edit the following:
    • Add the suffix App01 to profileName
    • Replace wwwroot with app01 in publishUrl
  20. Create a new ASP.NET web application called TestWebSiteApp01 and create a new Default.aspx page in it with the following code:
    <h1>App01 Application</h1>
  21. Right-click on the TestWebSiteApp01 project and Publish.
  22. Select Import and provide the edited Publish Profile file.
  23. In the first step of the Publish Web wizard (go back if necessary), select the App01 method and select Publish.
  24. Run the Website’s virtual application by appending the /app01 suffix to the site URL.

How it works…

In step 1, we create the Website on the Azure Portal, specifying the minimal set of parameters. If the existing web hosting plan is selected, the Website will start in the specified tier. In the recipe, by specifying a new web hosting plan, the Website is created in the free tier with some limitations in configuration.

The recipe uses the Azure Portal located at https://manage.windowsazure.com. However, the new Azure Portal will be at https://portal.azure.com. New features will probably be added only to the new Portal.

In steps 2 and 3, we download the Publish Profile file, which is an XML containing the various endpoints to publish the Website. At the time of writing, Web Deploy and FTP are supported by default. In steps 4, 5, and 6, we create a new ASP.NET web application with a sample ASPX page and run it locally.

In steps 7, 8, and 9, we publish the binaries of the Website, without source code files, into a local folder somewhere in the local machine. This unit of deployment (the folder) can be sent across the wire via FTP, as we do in steps 10 to 13 using the credentials and the hostname available in the Publish Profile file.

In steps 14 to 16, we use the Publish Profile file directly from Visual Studio, which recognizes the different methods of deployment and suggests Web Deploy as the default one. If we performed steps 10 to 13, steps 14 to 16 overwrite the existing deployment.

Actually, Web Deploy compares the target files with the ones to deploy, making the deployment incremental for those files that have been modified or added. This is extremely useful to avoid unnecessary transfers and to save bandwidth.

In steps 17 and 18, we configure a new Virtual Application, specifying its name and location. We can use an FTP client to browse the root folder of a website endpoint, where there are several folders such as wwwroot, locks, diagnostics, and deployments.

In step 19, we manually edit the Publish Profile file to support a second FTP endpoint, pointing to the new folder of the Virtual Application. Visual Studio will correctly understand this while parsing the file again in step 22, showing the new deployment option. Finally, we verify whether there are two applications: one on the root folder / and one on the /app01 alias.

There’s more…

Suppose we need to edit the website on the fly, editing a CSS or JS file or editing the HTML somewhere. We can do this using Web Matrix, which is available from the Azure Portal itself through a ClickOnce installation:

  1. Go to the Dashboard tab of the Website and click on WebMatrix at the bottom.
  2. Follow the instructions to install the software (if not yet installed) and, when it opens, select Edit live site directly (the magic is done through the Publish Profile file and Web Deploy).
  3. In the left-side tree, edit the Default.aspx file, and then save and run the Website again.

Azure Websites gallery

Since Azure Websites is a PaaS service, with no lock-in or particular knowledge or framework required to run it, it can host several open source CMSes in different languages. Azure provides a set of built-in web applications to choose from while creating a new website. This is probably not the best choice for production environments; however, for testing or development purposes, it should be a faster option than starting from scratch.

Wizards have been, for a while, the primary resource for developers to quickly start projects and speed up the process of creating complex environments. The Websites gallery creates instances of well-known CMSes with predefined configurations; by contrast, production environments are usually crafted manually, customizing each aspect of the installation.

To create a new Website using the gallery, proceed as follows:

  1. Create a new Website, specifying from gallery.
  2. Select the web application to deploy and follow the optional configuration steps.

If we create some resources (like databases) while using the gallery, they will be linked to the site in the Linked Resources tab.

Building a simple cache for applications

Azure Cache is a managed service with (at the time of writing this book) the following three offerings:

  • Basic: This service has a unit size of 128 MB, up to 1 GB with one named cache (the default one)
  • Standard: This service has a unit size of 1 GB, up to 10 GB with 10 named caches and support for notifications
  • Premium: This service has a unit size of 5 GB, up to 150 GB with ten named caches, support for notifications, and high availability

    Different offerings have different unit prices, and remember that when changing from one offering to another, all the cache data is lost. In all offerings, users can define the items’ expiration.

The Cache service listens to a specific TCP port. Accessing it from a .NET application is quite simple, with the Microsoft ApplicationServer Caching library available on NuGet. In the Microsoft.ApplicationServer.Caching namespace, the following are all the classes that are needed to operate:

  • DataCacheFactory: This class is responsible for instantiating the Cache proxies to interpret the configuration settings.
  • DataCache: This class is responsible for the read/write operation against the cache endpoint.
  • DataCacheFactoryConfiguration: This is the model class of the configuration settings of a cache factory. Its usage is optional as cache can be configured in the App/Web.config file in a specific configuration section.

Azure Cache is a key-value cache. We can insert and even get complex objects with arbitrary tree depth using string keys to locate them. The importance of the key is critical, as in a single named cache, only one object can exist for a given key. The architects and developers should have the proper strategy in place to deal with unique (and hierarchical) names.
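
A tiny sketch of one possible naming strategy (the scheme itself is an assumption) is to build hierarchical keys from entity names and identifiers, so that different objects never collide within the same named cache:

static class CacheKeys
{
    // For example: customer:ALFKI
    public static string Customer(string customerId)
    {
        return string.Format("customer:{0}", customerId);
    }

    // For example: customer:ALFKI:orders
    public static string CustomerOrders(string customerId)
    {
        return string.Format("customer:{0}:orders", customerId);
    }
}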

Getting ready

This recipe assumes that we have a valid Azure Cache endpoint of the standard type. We need the standard type because we use multiple named caches, and in later recipes, we use notifications.

We can create a Standard Cache endpoint of 1 GB via PowerShell. Perform the following steps to create the Standard Cache endpoint:

  1. Open the Azure PowerShell and type Add-AzureAccount. A popup window might appear. Type your credentials connected to a valid Azure subscription and continue.
    • Optionally, select the proper Subscription, if not the default one.
  2. Type this command to create a new Cache endpoint, replacing myCache with the proper unique name:
    New-AzureManagedCache -Name myCache -Location "West Europe" -Sku Standard -Memory 1GB
  3. After waiting for some minutes until the endpoint is ready, go to the Azure Portal and look for the Manage Keys section to get one of the two Access Keys of the Cache endpoint.
  4. In the Configure section of the Cache endpoint, a cache named default is created by default. In addition, create two named caches with the following parameters:
    • Expiry Policy: Absolute
    • Time: 10
    • Notifications: Enabled

    Expiry Policy could be Absolute (the default expiration time or the one set by the user is absolute, regardless of how many times the item has been accessed), Sliding (each time the item has been accessed, the expiration timer resets), or Never (items do not expire).

This Azure Cache endpoint is now available in the Management Portal, and it will be used in the entire article.

How to do it…

We are going to create a DataCache instance through a code-based configuration. We will perform simple operations with Add, Get, Put, and Append/Prepend, using a secondary-named cache to transfer all the contents of the primary one.

We will do this by performing the following steps:

  1. Add a new class named BuildingSimpleCacheExample to the project.
  2. Install the Microsoft.WindowsAzure.Caching NuGet package.
  3. Add the following using statement to the top of the class file:
    using Microsoft.ApplicationServer.Caching;
  4. Add the following private members to the class:
    private DataCacheFactory factory = null;
    private DataCache cache = null;
  5. Add the following constructor to the class:
    public BuildingSimpleCacheExample(string ep,
        string token,string cacheName)
    {
        DataCacheFactoryConfiguration config 
            = new DataCacheFactoryConfiguration();
        config.AutoDiscoverProperty
            = new DataCacheAutoDiscoverProperty(true, ep);
        config.SecurityProperties
            = new DataCacheSecurity(token, true);
                
        factory = new DataCacheFactory(config);
        cache = factory.GetCache(cacheName);
    }
  6. Add the following method, creating a palindrome string into the cache:
    public void CreatePalindromeInCache()
    {
        var objKey = "StringArray";            
        cache.Put(objKey, "");
        char letter = 'A';
        for (int i = 0; i < 10; i++)
        {
            cache.Append(objKey,
                char.ConvertFromUtf32((letter+i)));
            cache.Prepend(objKey, 
                char.ConvertFromUtf32((letter + i)));
        }
        Console.WriteLine(cache.Get(objKey));
    }
  7. Add the following method, adding an item into the cache to analyze its subsequent retrievals:
    public void AddAndAnalyze()
    {
        var randomKey = DateTime.Now.Ticks.ToString();
        var value="Cached string";
        cache.Add(randomKey, value);
        DataCacheItem cacheItem = cache.GetCacheItem(randomKey);
        Console.WriteLine(string.Format(
            "Item stored in {0} region with {1} expiration",
            cacheItem.RegionName,cacheItem.Timeout));
        cache.Put(randomKey, value, TimeSpan.FromSeconds(60));
        cacheItem = cache.GetCacheItem(randomKey);
        Console.WriteLine(string.Format(
            "Item stored in {0} region with {1} expiration",
            cacheItem.RegionName, cacheItem.Timeout));
    
        var version = cacheItem.Version;
        var obj = cache.GetIfNewer(randomKey, ref version);
        if (obj == null)
        {
            //No updates
        }            
    }
  8. Add the following method, transferring the contents of the cache named initially into a second one:
    public void BackupToDestination(string destCacheName)
    {            
        var destCache = factory.GetCache(destCacheName);
        var dump = cache.GetSystemRegions()
            .SelectMany(p => cache.GetObjectsInRegion(p))
            .ToDictionary(p=>p.Key,p=>p.Value);
        foreach (var item in dump)
        {
            destCache.Put(item.Key, item.Value);
        }
    }
  9. Add the following method to clear the cache named first:
    public void ClearCache()
    {
        cache.Clear();
    }
  10. Add the following method, using the methods added earlier, to the class:
    public static void RunExample()
    {
        var cacheName = "[named cache 1]";
        var backupCache = "[named cache 2]";
        string endpoint = "[cache endpoint]";
        string token = "[cache token/key]";
    
        BuildingSimpleCacheExample example 
            = new BuildingSimpleCacheExample(endpoint,
                token, cacheName);
        example.CreatePalindromeInCache();
        example.AddAndAnalyze();
        example.BackupToDestination(backupCache);
        example.ClearCache();
    }

How it works…

From steps 1 to 3, we set up the class. In step 4, we add private members to store the DataCacheFactory object used to create the DataCache object to access the Cache service. In the constructor that we add in step 5, we initialize the DataCacheFactory object using a configuration model class (DataCacheFactoryConfiguration). This strategy is for code-based initialization whenever settings cannot stay in the App.config/Web.config file.

In step 6, we use the Put() method to write an empty string into the StringArray bucket. We then use the Append() and Prepend() methods, designed to concatenate strings to existing strings, to build a palindrome string in the memory cache.

This sample does not make any sense in real-world scenarios, and we must pay attention to the following issues (a consolidated alternative is sketched after this list):

  • Writing an empty string into the cache is somewhat useless.
  • Each Append() or Prepend() operation travels over TCP to the cache and back. Though each call is very simple, it requires resources, and we should always try to consolidate calls.
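
A hedged sketch of the consolidated alternative (reusing the cache field of the recipe class) builds the palindrome locally and sends it with a single Put(), replacing twenty Append()/Prepend() round trips with one call:

public void CreatePalindromeLocally()
{
    var builder = new System.Text.StringBuilder();
    char letter = 'A';
    for (int i = 0; i < 10; i++)
    {
        builder.Insert(0, char.ConvertFromUtf32(letter + i));
        builder.Append(char.ConvertFromUtf32(letter + i));
    }
    cache.Put("StringArray", builder.ToString()); // one network round trip
    Console.WriteLine(cache.Get("StringArray"));
}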

In step 7, we use the Add() method to add a string to the cache. The difference between the Add() and Put() methods is that the first method throws an exception if the item already exists, while the second one always overwrites the existing value (or writes it for the first time). GetCacheItem() returns a DataCacheItem object, which wraps the value together with other metadata properties, such as the following:

  • CacheName: This is the named cache where the object is stored.
  • Key: This is the key of the associated bucket.
  • RegionName (user defined or system defined): This is the region of the cache where the object is stored.
  • Size: This is the size of the object stored.
  • Tags: These are the optional tags of the object, if it is located in a user-defined region.
  • Timeout: This is the current timeout before the object would expire.
  • Version: This is the version of the object. This is a DataCacheItemVersion object whose properties are not accessible due to their modifier. However, it is not important to access this property, as the Version object is used as a token against the Cache service to implement the optimistic concurrency. As for the timestamp value, its semantic can stay hidden from developers.

The first Add() method does not specify a timeout for the object, leaving the default global expiration timeout, while the next Put() method does, as we can check in the next Get() method. We finally ask the cache about the object with the GetIfNewer() method, passing the latest version token we have. This conditional Get method returns null if the object we own is already the latest one.

In step 8, we list all the keys of the first named cache, using the GetSystemRegions() method (to first list the system-defined regions), and for each region, we ask for their objects, copying them into the second named cache. In step 9, we clear all the contents of the first cache.

In step 10, we call the methods added earlier, specifying the Cache endpoint to connect to and the token/password, along with the two named caches in use. Replace [named cache 1], [named cache 2], [cache endpoint], and [cache token/key] with actual values.

There’s more…

Code-based configuration is useful when settings live somewhere other than the default .NET config files. Hardcoding them is not a best practice, so this is the standard way to declare them in the App.config file:

<configSections>
  <section name="dataCacheClients" type="Microsoft.ApplicationServer.Caching.DataCacheClientsSection,
  Microsoft.ApplicationServer.Caching.Core" allowLocation="true" allowDefinition="Everywhere" />
</configSections>

The XML mentioned earlier declares a custom section, which should be as follows:

<dataCacheClients>
  <dataCacheClient name="[name of cache]">
    <autoDiscover isEnabled="true" identifier="[domain of cache]" />
    <securityProperties mode="Message" sslEnabled="true">
      <messageSecurity authorizationInfo="[token of endpoint]" />
    </securityProperties>
  </dataCacheClient>
</dataCacheClients>

In the upcoming recipes, we will use this convention to set up the DataCache objects.
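
With that section in place, a minimal sketch of config-based initialization could look like the following (the named cache is a placeholder); the parameterless DataCacheFactory reads the dataCacheClients section automatically:

using Microsoft.ApplicationServer.Caching;

class ConfigBasedCacheExample
{
    public static DataCache GetNamedCache()
    {
        // No DataCacheFactoryConfiguration needed: settings come from App.config/Web.config.
        var factory = new DataCacheFactory();
        return factory.GetCache("[named cache]");
    }
}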

ASP.NET Support

With almost no effort, Azure Cache can be used as an Output Cache provider in ASP.NET and to store the session state. To enable this, in addition to the configuration mentioned earlier, we need to include these declarations in the <system.web> section as follows:

<sessionState mode="Custom" customProvider="AFCacheSessionStateProvider">
  <providers>
<add name="AFCacheSessionStateProvider" type="Microsoft.Web.DistributedCache.DistributedCacheSessionStateStoreProvider, Microsoft.Web.DistributedCache" cacheName="[named cache]" dataCacheClientName="[name of cache]" applicationName="AFCacheSessionState"/>
  </providers>
</sessionState>
<caching>
  <outputCache defaultProvider="AFCacheOutputCacheProvider">
    <providers>
      <add name="AFCacheOutputCacheProvider" type="Microsoft.Web.DistributedCache.DistributedCacheOutputCacheProvider, Microsoft.Web.DistributedCache" cacheName="[named cache]" dataCacheClientName="[name of cache]" applicationName="AFCacheOutputCache" />
    </providers>
  </outputCache>
</caching>

The difference between [name of cache] and [named cache] is as follows:

  • The [name of cache] part is the friendly name (an alias) of the cache client declared in the configuration above.
  • The [named cache] part is the named cache created in the Azure Cache service.

Connecting to the Azure Storage service

In an Azure Cloud Service, the storage account name and access key are stored in the service configuration file. By convention, the account name and access key for data access are provided in a setting named DataConnectionString. The account name and access key needed for Azure Diagnostics must be provided in a setting named Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString.

The DataConnectionString setting must be declared in the ConfigurationSettings section of the service definition file. However, unlike other settings, the connection string setting for Azure Diagnostics is implicitly defined when the Diagnostics module is specified in the Imports section of the service definition file. Consequently, it must not be specified in the ConfigurationSettings section.

A best practice is to use different storage accounts for application data and diagnostic data. This reduces the possibility of application data access being throttled by competition for concurrent writes from the diagnostics monitor.

What is Throttling?

In shared services, where the same resources are shared between tenants, limiting concurrent access to them is critical to preserve service availability. If a client misuses the service or simply generates a huge amount of traffic, other tenants pointing to the same shared resource could experience unavailability. Throttling (also known as traffic control plus request cutting) is one of the most widely adopted solutions to this issue.

Using separate storage accounts also provides a security boundary between application data and diagnostics data, as diagnostics data might be accessed by individuals who should have no access to application data.

In the Azure Storage library, access to the storage service is through one of the Client classes. There is one Client class for each Blob service, Queue service, and Table service; they are CloudBlobClient, CloudQueueClient, and CloudTableClient, respectively. Instances of these classes store the pertinent endpoint as well as the account name and access key.

The CloudBlobClient class provides methods to access containers, list their contents, and get references to containers and blobs. The CloudQueueClient class provides methods to list queues and get a reference to the CloudQueue instance that is used as an entry point to the Queue service functionality. The CloudTableClient class provides methods to manage tables and get the TableServiceContext instance that is used to access the WCF Data Services functionality while accessing the Table service. Note that the CloudBlobClient, CloudQueueClient, and CloudTableClient instances are not thread safe, so distinct instances should be used when accessing these services concurrently.
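
Tying this back to the throttling discussion, a hedged sketch of attaching a retry policy with exponential back-off to a CloudBlobClient (the back-off values below are arbitrary assumptions) could look like this:

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage.RetryPolicies;

class ThrottlingAwareClientExample
{
    public static CloudBlobClient CreateBlobClient(CloudStorageAccount account)
    {
        CloudBlobClient client = account.CreateCloudBlobClient();
        // Back off exponentially (starting at 2 seconds) and retry up to 5 times.
        client.DefaultRequestOptions.RetryPolicy =
            new ExponentialRetry(TimeSpan.FromSeconds(2), 5);
        return client;
    }
}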

The client classes must be initialized with the account name and access key, as well as the appropriate storage service endpoint. The Storage library has several helper classes for this; the StorageCredentials class (in the Microsoft.WindowsAzure.Storage.Auth namespace) initializes an instance from an account name and access key or from a shared access signature.
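
As a brief sketch of the shared access signature path mentioned above (the SAS token, account, and container names are placeholders), StorageCredentials can also be built without the account key:

using System;
using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Blob;

class SasCredentialsExample
{
    public static CloudBlobContainer GetContainerWithSas()
    {
        // {SAS_TOKEN} stands in for a previously generated shared access signature.
        StorageCredentials sasCredentials = new StorageCredentials("{SAS_TOKEN}");
        return new CloudBlobContainer(
            new Uri("https://{ACCOUNT_NAME}.blob.core.windows.net/{CONTAINER_NAME}"),
            sasCredentials);
    }
}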

In this recipe, we’ll learn how to use the CloudBlobClient, CloudQueueClient, and CloudTableClient instances to connect to the storage service.

Getting ready

This recipe assumes that the application’s configuration file contains the following:

<appSettings>
  <add key="DataConnectionString"
value="DefaultEndpointsProtocol=https;AccountName={ACCOUNT_NAME};AccountKey={ACCOUNT_KEY}"/>
   <add key="AccountName" value="{ACCOUNT_NAME}"/>
   <add key="AccountKey" value="{ACCOUNT_KEY}"/>
</appSettings>

We must replace {ACCOUNT_NAME} and {ACCOUNT_KEY} with appropriate values for the storage account name and access key, respectively.

We are not working in a Cloud Service but in a simple console application. Storage services, like many other Azure building blocks, can also be used on their own from on-premises environments.

How to do it…

We are going to connect to the Table service, the Blob service, and the Queue service, and perform a simple operation on each. We will do this using the following steps:

  1. Add a new class named ConnectingToStorageExample to the project.
  2. Add the following using statements to the top of the class file:
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;
    using Microsoft.WindowsAzure.Storage.Queue;
    using Microsoft.WindowsAzure.Storage.Table;
    using Microsoft.WindowsAzure.Storage.Auth;
    using System.Configuration;

    The System.Configuration assembly should be added via the Add Reference action onto the project, as it is not included in most of the project templates of Visual Studio.

  3. Add the following method, connecting the blob service, to the class:
    private static void UseCloudStorageAccountExtensions()
    {
        CloudStorageAccount cloudStorageAccount =
            CloudStorageAccount.Parse(
            ConfigurationManager.AppSettings[
            "DataConnectionString"]);
    
        CloudBlobClient cloudBlobClient =
            cloudStorageAccount.CreateCloudBlobClient();
        CloudBlobContainer cloudBlobContainer =
            cloudBlobClient.GetContainerReference(
            "{NAME}");
    
        cloudBlobContainer.CreateIfNotExists();
    }
  4. Add the following method, connecting the Table service, to the class:
    private static void UseCredentials()
    {
        string accountName = ConfigurationManager.AppSettings[
            "AccountName"];
        string accountKey = ConfigurationManager.AppSettings[
            "AccountKey"];
        StorageCredentials storageCredentials =
            new StorageCredentials(
            accountName, accountKey);
    
        CloudStorageAccount cloudStorageAccount =
            new CloudStorageAccount(storageCredentials, true);
        CloudTableClient tableClient =
            new CloudTableClient(
            cloudStorageAccount.TableEndpoint,
            storageCredentials);
    
        CloudTable table = 
            tableClient.GetTableReference("{NAME}");
        table.CreateIfNotExists();
    }
  5. Add the following method, connecting the Queue service, to the class:
    private static void UseCredentialsWithUri()
    {
        string accountName = ConfigurationManager.AppSettings[
            "AccountName"];
        string accountKey = ConfigurationManager.AppSettings[
            "AccountKey"];
        StorageCredentials storageCredentials =
            new StorageCredentials(
            accountName, accountKey);
    
        StorageUri baseUri =
            new StorageUri(new Uri(string.Format(
                "https://{0}.queue.core.windows.net/",
            accountName)));
        CloudQueueClient cloudQueueClient =
            new CloudQueueClient(baseUri, storageCredentials);
        CloudQueue cloudQueue =
            cloudQueueClient.GetQueueReference("{NAME}");
    
        cloudQueue.CreateIfNotExists();
    }
  6. Add the following method, using the other methods, to the class:
    public static void UseConnectionToStorageExample()
    {
      UseCloudStorageAccountExtensions();
      UseCredentials();
      UseCredentialsWithUri();
    }

How it works…

In steps 1 and 2, we set up the class. In step 3, we implement the standard way to access the storage service using the Storage Client library. We use the static CloudStorageAccount.Parse() method to create a CloudStorageAccount instance from the value of the connection string stored in the configuration file. We then use this instance with the CreateCloudBlobClient() extension method of the CloudStorageAccount class to get the CloudBlobClient instance that we use to connect to the Blob service. We can also use this technique with the Table service and the Queue service, using the relevant extension methods, CreateCloudTableClient() and CreateCloudQueueClient(), respectively. We complete this example using the CloudBlobClient instance to get a CloudBlobContainer reference to a container and then create it if it does not exist. We need to replace {NAME} with the name of a container.

In step 4, we create a StorageCredentials instance directly from the account name and access key. We then use this to construct a CloudStorageAccount instance, specifying that any connection should use HTTPS. Using this technique, we need to provide the Table service endpoint explicitly when creating the CloudTableClient instance. We then use this to create the table. We need to replace {NAME} with the name of a table. We can use the same technique with the Blob service and Queue service using the relevant CloudBlobClient or CloudQueueClient constructor.

In step 5, we use a similar technique, except that we avoid the intermediate step of using a CloudStorageAccount instance and explicitly provide the endpoint for the Queue service. We use the CloudQueueClient instance created in this step to create the queue. We need to replace {NAME} with the name of a queue. Note that we hardcoded the endpoint for the Queue service.

Though this last method is officially supported, it is not a best practice to bind our code to hardcoded strings with endpoint URIs. So, it is preferable to use one of the previous methods that hides the complexity of the URI generation at the library level.

In step 6, we add a method that invokes the methods added in the earlier steps.

There’s more…

With the general availability of .NET Framework Version 4.5, many CLR libraries have added support for asynchronous methods with the Async/Await pattern. The latest versions of the Azure Storage library also have these overloads, which are useful when developing mobile applications and fast web APIs. They are generally useful whenever we need to integrate the task-based execution model into our applications.

Almost every long-running method of the library has a corresponding [Method]Async() counterpart, which can be called as follows:

await cloudQueue.CreateIfNotExistsAsync();
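
For example, a small hedged sketch of the asynchronous counterparts used with a queue (the queue name is a placeholder) could look like this:

using System.Configuration;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class AsyncQueueExample
{
    public static async Task AddMessageAsync()
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(
            ConfigurationManager.AppSettings["DataConnectionString"]);
        CloudQueue queue = account.CreateCloudQueueClient()
            .GetQueueReference("{QUEUE_NAME}");

        await queue.CreateIfNotExistsAsync();
        await queue.AddMessageAsync(new CloudQueueMessage("Do something, asynchronously"));
    }
}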

In the rest of the book, we will continue to use the standard, synchronous pattern.

Adding messages to a Storage queue

The CloudQueue class in the Azure Storage library provides both synchronous and asynchronous methods to add a message to a queue. A message comprises up to 64 KB of data (48 KB if encoded in Base64). By default, the Storage library Base64-encodes message content to ensure that the request payload containing the message is valid XML. This encoding adds overhead that reduces the actual maximum size of a message.

A queue message should not be used to transport a big payload, since the purpose of a queue is messaging, not storage. If required, a user can store the payload in a blob and use a queue message to point to it, letting the receiver fetch the payload from its remote location.
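
A hedged sketch of this pattern (container and queue names are assumptions) stores the payload in a blob and enqueues only its URI:

using System;
using System.Configuration;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage.Queue;

class LargePayloadExample
{
    public static void EnqueuePointerToBlob(string largePayload)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(
            ConfigurationManager.AppSettings["DataConnectionString"]);

        // Store the payload in a blob...
        CloudBlobContainer container =
            account.CreateCloudBlobClient().GetContainerReference("payloads");
        container.CreateIfNotExists();
        CloudBlockBlob blob = container.GetBlockBlobReference(Guid.NewGuid().ToString());
        blob.UploadText(largePayload);

        // ...and enqueue only a small message pointing to it.
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("pointers");
        queue.CreateIfNotExists();
        queue.AddMessage(new CloudQueueMessage(blob.Uri.ToString()));
    }
}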

Each message added to a queue has a time-to-live property after which it is deleted automatically. The maximum and default time-to-live value is 7 days.

In this recipe, we’ll learn how to add messages to a queue.

Getting ready

This recipe assumes the following code is in the application configuration file:

<appSettings>
  <add key="DataConnectionString"
value="DefaultEndpointsProtocol=https;AccountName={ACCOUNT_NAME};AccountKey={ACCOUNT_KEY}"/>
</appSettings>

We must replace {ACCOUNT_NAME} and {ACCOUNT_KEY} with appropriate values of the account name and access key.

How to do it…

We are going to create a queue and add some messages to it. We do this as follows:

  1. Add a new class named AddMessagesOnStorageExample to the project.
  2. Install the WindowsAzure.Storage NuGet package and add the following assembly references to the project:
    System.Configuration
  3. Add the following using statements to the top of the class file:
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Queue;
    using System.Configuration;
  4. Add the following private member to the class:
    private CloudQueue cloudQueue;
  5. Add the following constructor to the class:
    public AddMessagesOnStorageExample(String queueName)
    {
        CloudStorageAccount cloudStorageAccount =
            CloudStorageAccount.Parse(
            ConfigurationManager.AppSettings[
            "DataConnectionString"]);
        CloudQueueClient cloudQueueClient =
            cloudStorageAccount.CreateCloudQueueClient();
        cloudQueue = cloudQueueClient.GetQueueReference(queueName);
        cloudQueue.CreateIfNotExists();
    }
  6. Add the following method to the class, adding two messages:
    public void AddMessages()
    {
        String content1 = "Do something";
        CloudQueueMessage message1 = new CloudQueueMessage(content1);
        cloudQueue.AddMessage(message1);
    
        String content2 = "Do something that expires in 1 day";
        CloudQueueMessage message2 = new CloudQueueMessage(content2);
        cloudQueue.AddMessage(message2, TimeSpan.FromDays(1.0));
    
        String content3 = "Do something that expires in 2 hours,"+
            " starting in 1 hour from now";
        CloudQueueMessage message3 = new CloudQueueMessage(content3);
        cloudQueue.AddMessage(message3,
            TimeSpan.FromHours(2),TimeSpan.FromHours(1));
    }
  7. Add the following method, that uses the AddMessage() method, to the class:
    public static void UseAddMessagesExample()
    {
        String queueName = "{QUEUE_NAME}";
        AddMessagesOnStorageExample example = new AddMessagesOnStorageExample(queueName);
        example.AddMessages();
    }

How it works…

In steps 1 through 3, we set up the class. In step 4, we add a private member to store the CloudQueue object used to interact with the Queue service. We initialize this in the constructor we add in step 5 where we also create the queue.

In step 6, we add a method that adds three messages to a queue. We create three CloudQueueMessage objects. We add the first message to the queue with the default time-to-live of seven days, the second is added specifying an expiration of 1 day, and the third becomes visible 1 hour after its insertion into the queue, with an absolute expiration of 2 hours.

Note that a client (library) exception is thrown if we specify a visibility delay greater than the absolute TTL of the message. This check is enforced on the client side, instead of making a (failing) server call.

In step 7, we add a method that invokes the methods we added earlier. We need to replace {QUEUE_NAME} with an appropriate name for a queue.

There’s more…

To remove the messages we added in this recipe from the queue, we can call the Clear() method of the CloudQueue class as follows:

public void ClearQueue()
{
    cloudQueue.Clear();
}

Summary

In this article, we have worked through some of the recipes of this book to build a complete overview of the software infrastructure we need to set up on the cloud.
