
Key Features Explained


Service Bus

The Windows Azure Service Bus provides a hosted, secure, and widely available infrastructure for widespread communication, large-scale event distribution, naming, and service publishing. Service Bus provides connectivity options for Windows Communication Foundation (WCF) and other service endpoints, including REST endpoints, that would otherwise be difficult or impossible to reach. Endpoints can be located behind Network Address Translation (NAT) boundaries, or bound to frequently changing, dynamically assigned IP addresses, or both.

Getting started

To get started with the features of Service Bus, you need to make sure you have the Windows Azure SDK installed.

Queues


Queues in the AppFabric feature (different from Table Storage queues) offer a FIFO message delivery capability. This is valuable for applications that expect messages in a certain order. Just like ordinary Azure queues, Service Bus queues enable the decoupling of your application components, which can keep functioning even if some parts of the application are offline. Some differences between the two types of queues are, for example, that Service Bus queues can hold larger messages and can be used in conjunction with the Access Control Service.

Working with queues

To create a queue, go to the Windows Azure portal and select the Service Bus, Access Control & Caching tab. Next, select Service Bus, select the namespace, and click on New Queue; the following screen will appear. If you did not set up a namespace earlier, you need to create one before you can create a queue:

There are some properties that can be configured during the setup process of a queue. Obviously, the name uniquely identifies the queue in the namespace. Default Message Time To Live gives messages this TTL unless they specify their own; it can also be set in code and is a TimeSpan value.

Duplicate Detection History Time Window determines how long the (unique) message IDs of received messages are retained to check for duplicate messages. This property is ignored if the Requires Duplicate Detection option is not set.

Keep in mind that a long detection history results in the persistence of message IDs during that whole period. If you process many messages, the queue size grows, and so does your bill.

When a message expires, or when the limit of the queue size is reached, messages are dead lettered: they end up in a different queue named $DeadLetterQueue. Imagine a scenario where a lot of traffic in your queue results in messages in the dead letter queue. Your application should be robust and process these messages as well.
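Draining the dead letter queue can be sketched as follows. This is an illustrative sketch, not code from the book: the connectionString variable is assumed to hold your Service Bus connection string, and the dead letter path is obtained with QueueClient.FormatDeadLetterPath().

```csharp
using System;
using Microsoft.ServiceBus.Messaging;

MessagingFactory factory = MessagingFactory.CreateFromConnectionString(connectionString);

// The dead letter queue is addressed as a sub-queue of the original queue
string deadLetterPath = QueueClient.FormatDeadLetterPath("geotopiaqueue");
MessageReceiver deadLetterReceiver = factory.CreateMessageReceiver(deadLetterPath);

BrokeredMessage deadLetter;
while ((deadLetter = deadLetterReceiver.Receive(TimeSpan.FromSeconds(5))) != null)
{
    // The platform records why the message was dead lettered
    object reason;
    deadLetter.Properties.TryGetValue("DeadLetterReason", out reason);

    // ... log, repair, or resubmit the message ...
    deadLetter.Complete();
}
```

Receive(TimeSpan) returns null when no message arrives within the timeout, which ends the loop cleanly.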

The lock duration property defines the duration of the lock when the PeekLock() method is called. The PeekLock() method hides a specific message from other consumers/processors until the lock duration expires. Typically, this value needs to be long enough to process and delete the message.

A sample scenario

Remember the differences between the two queue types that Windows Azure offers: Service Bus queues are able to guarantee first-in, first-out ordering and to support transactions. In our scenario, a user posts a geotopic on the canvas containing text, and also uploads a video by using the parallel upload functionality. The WCF service CreateGeotopic() then posts a message in the queue to enter the geotopic, and when the file finishes uploading, a second message is sent to the queue. These two messages should be part of a single transaction. Geotopia.Processor processes the first message, but only if the media file has finished uploading. In this example, you can see how a transaction is handled and how a message can be abandoned and made available on the queue again. If the geotopic is validated as a whole (the file is uploaded properly), the worker role reroutes the message to a designated audit trail queue, to keep track of actions made by the system, and also sends it to a topic (see the next section) dedicated to messages that need to be pushed to mobile devices. The messages in this topic are again processed by a worker role. The reason for choosing a separate worker role is that it results in a loosely coupled solution that can be scaled in a fine-grained way, by scaling only the back-end worker role.

See the following diagram for an overview of this scenario:

In the previous section, we already created a queue named geotopiaqueue. In order to work with queues, you need a service identity (in this case, one with symmetric issuer and key credentials) for the service namespace.

Preparing the project

In order to make use of the Service Bus capabilities, you need to add a reference to Microsoft.ServiceBus.dll, located in <drive>:\Program Files\Microsoft SDKs\Windows Azure\.NET SDK\2012-06\ref. Next, add the following using statements to your file:

using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

Your project is now ready to make use of Service Bus queues. The connection string for your service namespace has the following format:

"Endpoint=sb://<servicenamespace>.servicebus.windows.net/;SharedSecretIssuer=<issuerName>;SharedSecretValue=<yoursecret>"

The properties of the queue you configured in the Windows Azure portal can also be set programmatically.
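For example, the queue can be created from code with the same properties the portal exposes. The following sketch assumes a connectionString variable with namespace credentials; the chosen values are illustrative, not prescriptive:

```csharp
using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

NamespaceManager namespaceManager =
    NamespaceManager.CreateFromConnectionString(connectionString);

QueueDescription description = new QueueDescription("geotopiaqueue")
{
    DefaultMessageTimeToLive = TimeSpan.FromDays(1),
    RequiresDuplicateDetection = true,
    DuplicateDetectionHistoryTimeWindow = TimeSpan.FromMinutes(10),
    EnableDeadLetteringOnMessageExpiration = true,
    LockDuration = TimeSpan.FromSeconds(90),
    MaxSizeInMegabytes = 1024 // 1 GB
};

if (!namespaceManager.QueueExists("geotopiaqueue"))
{
    namespaceManager.CreateQueue(description);
}
```

The QueueExists check keeps the snippet idempotent; CreateQueue throws if the queue already exists.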

Sending messages

Messages that are sent to a Service Bus queue are instances of BrokeredMessage. This class contains standard properties such as TimeToLive and MessageId. An important property is Properties, which is of type IDictionary<string, object>, where you can add additional data. The body of the message can be set in the constructor of BrokeredMessage, where the parameter must be of a type decorated with the [Serializable] attribute.

The following code snippet shows how to send a BrokeredMessage:

MessagingFactory factory = MessagingFactory.CreateFromConnectionString(connectionString);
MessageSender sender = factory.CreateMessageSender("geotopiaqueue");
sender.Send(new BrokeredMessage(
    new Geotopic
    {
        id = id,
        subject = subject,
        text = text,
        PostToFacebook = PostToFacebook,
        accessToken = accessToken,
        MediaFile = MediaFile //Uri of uploaded media file
    }));

As the scenario depicts a situation where two messages are expected to be sent in a certain order and to be treated as a single transaction, we need to add some more logic to the code snippet.

Right before this message is sent, the media file is uploaded by using the BlobUtil class. Consider sending the media file together with the BrokeredMessage if it is small enough. The upload might be a long-running operation, depending on the size of the file. The asynchronous upload process returns a Uri, which is passed to the BrokeredMessage.

The situation is:

  • A multimedia file is uploaded from the client to Windows Azure Blob storage using a parallel upload (or passed on in the message). A parallel upload breaks the media file into several chunks and uploads them separately by using multithreading.

  • A message is sent to geotopiaqueue, and Geotopia.Processor processes the messages in the queues in a single transaction.
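The single-transaction part of the second step can be sketched with TransactionScope, which Service Bus supports for operations on a single entity. This is an illustrative sketch; the factory and the two BrokeredMessage variables are assumed to exist:

```csharp
using System.Transactions;
using Microsoft.ServiceBus.Messaging;

// Both sends target the same entity (geotopiaqueue), which is required
// for a Service Bus transaction.
MessageSender sender = factory.CreateMessageSender("geotopiaqueue");

using (TransactionScope scope = new TransactionScope())
{
    sender.Send(geotopicMessage);       // the posted geotopic
    sender.Send(uploadFinishedMessage); // signals that the media file upload completed
    scope.Complete();                   // commit: both messages become visible, or neither
}
```

If an exception occurs before Complete() is called, the scope rolls back and neither message is delivered to Geotopia.Processor.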

Receiving messages

On the other side of the Service Bus queue resides our worker role, Geotopia.Processor, which performs the following tasks:

  • Grabs the messages from the queue

  • Sends the messages straight to a table in Windows Azure Storage for auditing purposes

  • Creates a geotopic that can be subscribed to (see the next section)

The following code snippet shows how to perform these three tasks:

MessagingFactory factory = MessagingFactory.CreateFromConnectionString(connectionString);
MessageReceiver receiver = factory.CreateMessageReceiver("geotopiaqueue");
BrokeredMessage receivedMessage = receiver.Receive();
try
{
    ProcessMessage(receivedMessage);
    receivedMessage.Complete();
}
catch (Exception e)
{
    receivedMessage.Abandon();
}

Cross-domain communication

We created a new web role in our Geotopia solution, hosting the WCF services we want to expose. As the client is a Silverlight one (and runs in the browser), we face cross-domain communication. To protect against security vulnerabilities and to prevent cross-site requests from a Silverlight client to services without the user's knowledge, Silverlight by default allows only site-of-origin communication. Cross-site request forgery is an exploit that can occur when cross-domain communication is allowed; for example, a Silverlight application sending commands to some service running somewhere on the Internet.

As we want the Geotopia Silverlight client to access the WCF service running in another domain, we need to explicitly allow cross-domain operations. This can be achieved by adding a file named clientaccesspolicy.xml at the root of the domain where the WCF service is hosted and allowing this cross-domain access. Another option is to add a crossdomain.xml file at the root where the service is hosted.

Please go to http://msdn.microsoft.com/en-us/library/cc197955(v=vs.95).aspx to find more details on the cross-domain communication issues.

Comparison

The following table shows the similarities and differences between Windows Azure and Service Bus queues:

Criteria | Windows Azure queue | Service Bus queue
Ordering guarantee | No, but best-effort first-in, first-out | First-in, first-out
Delivery guarantee | At least once | At most once; use the PeekLock() method to ensure that no messages are missed. PeekLock() together with the Complete() method enables a two-stage receive operation.
Transaction support | No | Yes, by using TransactionScope
Receive mode | Peek & Lease | Peek & Lock; Receive & Delete
Lease/Lock duration | Between 30 seconds and 7 days | Between 60 seconds and 5 minutes
Lease/Lock granularity | Message level | Queue level
Batched receive | Yes, by using GetMessages(count) | Yes, by using the prefetch property or transactions
Scheduled delivery | Yes | Yes
Automatic dead lettering | No | Yes
In-place update | Yes | No
Duplicate detection | No | Yes
WCF integration | No | Yes, through WCF bindings
WF integration | Not standard; needs a customized activity | Yes, out-of-the-box activities
Message size | Maximum 64 KB | Maximum 256 KB
Maximum queue size | 100 TB, the limit of a storage account | 1, 2, 3, 4, or 5 GB; configurable
Message TTL | Maximum 7 days | Unlimited
Number of queues | Unlimited | 10,000 per service namespace
Management protocol | REST over HTTP(S) | REST over HTTPS
Runtime protocol | REST over HTTP(S) | REST over HTTPS; TCP with TLS
Queue naming rules | Maximum of 63 characters | Maximum of 260 characters
Queue length function | Yes, value is approximate | Yes, exact value
Throughput | Maximum of 2,000 messages/second | Maximum of 2,000 messages/second
Authentication | Symmetric key | ACS claims
Role-based access control | No | Yes, through ACS roles
Identity provider federation | No | Yes
Costs | $0.01 per 10,000 transactions | $0.01 per 10,000 transactions
Billable operations | Every call that touches "storage" | Only Send and Receive operations
Storage costs | $0.14 per GB per month | None
ACS transaction costs | None, since ACS is not supported | $1.99 per 100,000 token requests

Background information

There are some additional characteristics of Service Bus queues that need your attention:

  • In order to guarantee the FIFO mechanism, you need to use messaging sessions.

  • Using Receive & Delete on Service Bus queues reduces transaction costs, since the receive and the delete together count as a single operation.

  • The maximum size of a Base64-encoded message on the Windows Azure queue is 48 KB; for standard encoding it is 64 KB.

  • Sending messages to a Service Bus queue that has reached its limit will throw an exception that needs to be caught.

  • When the throughput has reached its limit, the HTTP 503 error response is returned from the Windows Azure queue service. Implement retrying logic to tackle this issue.

  • Throttled requests (thus being rejected) are not billable.

  • ACS transactions are based on instances of the message factory class. The received token will expire after 20 minutes, meaning that you will only need three tokens per hour of execution.
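The first point, guaranteeing FIFO through messaging sessions, can be sketched as follows. This is an illustrative sketch: it assumes the queue was created with RequiresSession set to true, and the session ID is a made-up example:

```csharp
using System;
using Microsoft.ServiceBus.Messaging;

QueueClient client =
    QueueClient.CreateFromConnectionString(connectionString, "geotopiaqueue");

// The sender stamps related messages with the same SessionId
BrokeredMessage message = new BrokeredMessage("payload");
message.SessionId = "user-42";
client.Send(message);

// The receiver accepts that session and receives its messages in FIFO order
MessageSession session = client.AcceptMessageSession("user-42");
BrokeredMessage received;
while ((received = session.Receive(TimeSpan.FromSeconds(5))) != null)
{
    received.Complete();
}
session.Close();
```

Ordering is guaranteed per session; messages with different SessionId values may still interleave.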

Topics and subscriptions

Topics and subscriptions are useful in scenarios where (instead of the single consumer of a queue) multiple consumers are part of the pattern. Imagine, in our scenario, that users want to be subscribed to topics posted by friends. In this case, a subscription is created on a topic and the worker role processes it; for example, mobile clients can receive push notifications from the worker role.

Sending messages to a topic works in a similar way as sending messages to a Service Bus queue.
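The similarity can be seen in the following sketch; only the entity name differs from the queue example. The topic name geotopiatopic matches the one used later in this section:

```csharp
using Microsoft.ServiceBus.Messaging;

MessagingFactory factory = MessagingFactory.CreateFromConnectionString(connectionString);
MessageSender topicSender = factory.CreateMessageSender("geotopiatopic");

BrokeredMessage message = new BrokeredMessage("a new geotopic");
message.Properties["subject"] = "interestingsubject"; // used by subscription filters
topicSender.Send(message);
```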

Preparing the project

In the Windows Azure portal, go to the Service Bus, Access Control & Caching tab. Select Topics and create a new topic, as shown in the following screenshot:

Next, click on OK and a new topic is created for you. The next thing you need to do is to create a subscription on this topic. To do this, select New Subscription and create a new subscription, as shown in the following screenshot:

Using filters

Topics and subscriptions form, by default, a publish/subscribe mechanism, where messages are made available to registered subscriptions. To actively influence a subscription (and receive only those messages that are of interest to you), you can create subscription filters. A SqlFilter can be passed as a parameter to the CreateSubscription method of the NamespaceManager class. SqlFilter operates on the properties of the messages, so we need to extend the messages with properties to filter on.

In our scenario, we are only interested in messages that are concerning a certain subject. The way to achieve this is shown in the following code snippet:

BrokeredMessage message = new BrokeredMessage(
    new Geotopic
    {
        id = id,
        subject = subject,
        text = text,
        PostToFacebook = PostToFacebook,
        accessToken = accessToken,
        mediaFile = fileContent
    });
//used for topics & subscriptions
message.Properties["subject"] = subject;

The preceding piece of code extends the BrokeredMessage with a subject property that can be used in a SqlFilter. A filter can only be applied in code, on the subscription, and not in the Windows Azure portal. This is fine, because in Geotopia, users must be able to subscribe to interesting topics; for every topic that does not yet exist, a new subscription is made and processed by the worker role, the processor. The worker role contains the following code snippet in one of its threads:

Uri uri = ServiceBusEnvironment.CreateServiceUri("sb", "geotopiaservicebus", string.Empty);
string name = "owner";
string key = "<yourkey>";
//get some credentials
TokenProvider tokenProvider = TokenProvider.CreateSharedSecretTokenProvider(name, key);
//create a namespace client and a messaging factory
NamespaceManager namespaceClient = new NamespaceManager(uri, tokenProvider);
MessagingFactory factory = MessagingFactory.Create(uri, tokenProvider);

BrokeredMessage message = new BrokeredMessage();
message.Properties["subject"] = "interestingsubject";
MessageSender sender = factory.CreateMessageSender("geotopiatopic");
sender.Send(message); //message is sent to the topic

SubscriptionDescription subDesc = namespaceClient.CreateSubscription(
    "geotopiatopic",
    "SubscriptionOnMe",
    new SqlFilter("subject='interestingsubject'"));

//the processing loop
MessageReceiver receiver =
    factory.CreateMessageReceiver("geotopiatopic/subscriptions/SubscriptionOnMe");
while (true)
{
    //it now only gets messages containing the property 'subject'
    //with the value 'interestingsubject'
    BrokeredMessage receivedMessage = receiver.Receive();
    try
    {
        ProcessMessage(receivedMessage);
        receivedMessage.Complete();
    }
    catch (Exception e)
    {
        receivedMessage.Abandon();
    }
}

Windows Azure Caching

Windows Azure offers caching capabilities out of the box. Caching is fast because it is built as an in-memory technology, distributed across different servers.

Windows Azure Caching offers two types of cache:

  • Caching deployed on a role

  • Shared caching

When you decide to host caching on your Windows Azure roles, you need to pick from two deployment alternatives. The first is dedicated caching, where a worker role is fully dedicated to run as a caching store and its memory is used for caching. The second option is to create a co-located topology, meaning that a certain percentage of available memory in your roles is assigned and reserved to be used for in-memory caching purposes. Keep in mind that the second option is the most cost-effective one, as you don't have a role running just for its memory.

Shared caching is a central caching repository managed by the platform, which is accessible to your hosted services. You need to register the shared caching mechanism in the Service Bus, Access Control & Caching section of the portal, configuring a namespace and the size of the cache (remember, there is money involved). This caching facility is a shared one and runs inside a multitenant environment.

Caching capabilities

Both shared and dedicated caching offer a rich feature set. The following overview describes each feature:

ASP.NET 4.0 caching providers

When you build ASP.NET 4.0 applications and deploy them on Windows Azure, the platform will install caching providers for them. This enables your ASP.NET 4.0 applications to use caching easily.

Programming model

You can use the Microsoft.ApplicationServer.Caching namespace to perform CRUD operations on your cache. The application using the cache is responsible for populating and reloading the cache, as the programming model is based on the cache-aside pattern. This means that initially the cache is empty and will be populated during the lifetime of the application. The application checks whether the desired data is present. If not, the application reads it from (for example) a database and inserts it into the cache.

The caching mechanism deployed on one of your roles, whether dedicated or not, lives up to the high availability of Windows Azure. It saves copies of your items in the cache, in case a role instance goes down.

Configuration model

Configuration of caching (server side) is not relevant in the case of shared caching, as this is the standard, out-of-the-box functionality that can only vary in size, namespace, and location.

 

It is possible to create named caches. Every single cache has its own configuration settings, so you can really fine-tune your caching requirements. All settings are stored in the service definition and service configuration files. As the settings of named caches are stored in JSON format, they are difficult to read.

 

If one of your roles wants to access Windows Azure Cache, it needs some configuration as well. A DataCacheFactory object is used to return the DataCache objects that represent the named caches. Client cache settings are stored in the designated app.config or web.config files.

 

A configuration sample is shown later on in this section, together with some code snippets.

Security model

The two types of caching (shared and role-based) have two different ways of handling security.

 

Role-based caching is secured by its endpoints: only clients that are allowed to use these endpoints are permitted to touch the cache. Shared caching is secured by the use of an authentication token.

Concurrency model

As multiple clients can access and modify cache items simultaneously, there are concurrency issues to take care of; both optimistic and pessimistic concurrency models are available.

 

In the optimistic concurrency model, updating objects in the cache does not result in locks. An update only takes place if the version supplied with the update matches the version of the item that currently resides in the cache.

 

When you decide to use the pessimistic concurrency model, items are locked explicitly by the cache client. When an item is locked, other lock requests are rejected by the platform. Locks need to be released by the client or after some configurable time-out, in order to prevent eternal locking.
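The pessimistic model can be sketched with the lock-aware methods of the caching API. This is an illustrative sketch; the key name and timeout are assumptions:

```csharp
using System;
using Microsoft.ApplicationServer.Caching;

DataCache cache = new DataCacheFactory().GetDefaultCache();
cache.Put("profile-42", "initial value");

DataCacheLockHandle lockHandle;
// GetAndLock rejects other lock requests on this key for up to 30 seconds
string value = (string)cache.GetAndLock(
    "profile-42", TimeSpan.FromSeconds(30), out lockHandle);

// ... modify the value while it is locked ...

// PutAndUnlock writes the new value and releases the lock in one call;
// use Unlock("profile-42", lockHandle) to release without updating.
cache.PutAndUnlock("profile-42", value + " (updated)", lockHandle);
```

Note that plain Get calls are not blocked by a lock; only other lock requests are rejected.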

Regions and tagging

Cached items can be grouped together in a so-called region. Together with additional tagging of cached items, it is possible to search for tagged items within a certain region. Creating a region results in adding cache items to be stored on a single server (analogous to partitioning). If additional backup copies are enabled, the region with all its items is also saved on a different server, to maintain availability.
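Regions and tags can be combined as in the following sketch; the region, key, and tag names are made up for illustration:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.ApplicationServer.Caching;

DataCache cache = new DataCacheFactory().GetDefaultCache();

// All items in this region are stored on a single server
cache.CreateRegion("geotopics");

DataCacheTag[] tags = { new DataCacheTag("sports"), new DataCacheTag("news") };
cache.Add("topic-1", "a geotopic about sports", tags, "geotopics");

// Retrieve every item in the region carrying at least one of the given tags
foreach (KeyValuePair<string, object> match in
         cache.GetObjectsByAnyTag(new[] { new DataCacheTag("sports") }, "geotopics"))
{
    Console.WriteLine("{0}: {1}", match.Key, match.Value);
}
```

Tag-based retrieval is only available for items stored in a region, which is the trade-off for giving up distribution across servers.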

Notifications

It is possible to have your application notified by Windows Azure when cache operations occur. Cache notifications exist for both operations on regions and items. A notification is sent when CreateRegion, ClearRegion, or RemoveRegion is executed. The operations AddItem, ReplaceItem, and RemoveItem on cached items also cause notifications to be sent.

 

Notifications can be scoped on the cache, region, and item level. This means you can configure them to narrow the scope of notifications and only receive those that are relevant to your applications.

 

Notifications are polled by your application at a configurable interval.

Availability

To keep up the high availability you are used to on Windows Azure, configure your caching role(s) to maintain backup copies. This means that the platform replicates copies of your cache within your deployment across different fault domains.

Local caching

To minimize the number of roundtrips between cache clients and the Windows Azure cache, enable local caching. Local caching means that every cache client maintains an in-memory reference to the item itself. Requesting that same item again will cause the object to be returned from the local cache instead of the role-based cache. Make sure you choose the right lifetime for your objects; otherwise, you might work with outdated cache items.
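Local caching can also be enabled from code, as in the following sketch; the object count and lifetime are illustrative values:

```csharp
using System;
using Microsoft.ApplicationServer.Caching;

DataCacheFactoryConfiguration configuration = new DataCacheFactoryConfiguration();
configuration.LocalCacheProperties = new DataCacheLocalCacheProperties(
    1000,                        // maximum number of locally cached objects
    TimeSpan.FromMinutes(5),     // lifetime of a local copy
    DataCacheLocalCacheInvalidationPolicy.TimeoutBased);

DataCacheFactory factory = new DataCacheFactory(configuration);
DataCache cache = factory.GetDefaultCache();
// Repeated Get calls for the same key within 5 minutes are now served
// from the client's own memory.
```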

Expiration and Eviction

Cache items can be removed explicitly or implicitly by expiration or eviction.

 

The process of expiration means that the caching facility removes items from the cache automatically. Items will be removed after their time-out value expires, but keep in mind that locked items will not be removed even if they pass their expiration date. Upon calling the Unlock method, it is possible to extend the expiration date of the cached item.

 

To ensure that there is sufficient memory available for caching purposes, the least recently used (LRU) eviction is supported. The process of eviction means that memory will be cleared and cached items will be evicted when certain memory thresholds are exceeded.

 

By default, Shared Cache items expire after 48 hours. This behavior can be overridden by the overloads of the Add and Put methods.

Setting it up

To enable role-based caching, you need to configure it in Visual Studio. Open the Caching tab of the properties of your web or worker role (you decide which role is the caching one). Fill out the settings, as shown in the following screenshot:

The configuration settings in this example cause the following to happen:

  • Role-based caching is enabled.

  • The specific role will be a dedicated role just for caching.

  • Besides the default cache, there are two additional, named caches for different purposes. The first is a highly available cache for recently added geotopics, with a sliding window. This means that every time an item is accessed, its expiration time is reset to the configured 10 minutes. For our geotopics, this is a good approach, since access to recently posted geotopics is heavy at first but slows down as time passes (and thus they will eventually be removed from the cache). The second named cache is specifically for profile pictures, with a long time-to-live, as these pictures do not change often.

Caching examples

In this section, several code snippets explain the use of Windows Azure Caching and clarify different features. Ensure that you get the right assemblies for Windows Azure Caching by running the following command in the Package Manager Console: Install-Package Microsoft.WindowsAzure.Caching. Running this command updates the designated config file for your project. Replace the [cache cluster role name] tag in the configuration file with the name of the role that hosts the cache.

Adding items to the cache

The following code snippet demonstrates how to access a named cache and how to add and retrieve items from it (you will see the use of tags and the sliding window):

DataCacheFactory cacheFactory = new DataCacheFactory();
//get a reference to this named cache
DataCache geotopicsCache = cacheFactory.GetCache("RecentGeotopics");
geotopicsCache.Clear(); //clear the whole cache
DataCacheTag[] tags = new DataCacheTag[]
{
    new DataCacheTag("subject"),
    new DataCacheTag("test")
};
//add a short time-to-live item, overriding the default 10 minutes
DataCacheItemVersion version = geotopicsCache.Add(
    geotopicID, new Geotopic(), TimeSpan.FromMinutes(1), tags);
//add a default item (default 10 minutes)
geotopicsCache.Add("defaultTTL", new Geotopic());
//let some minutes pass
DataCacheItem item = geotopicsCache.GetCacheItem(geotopicID); //returns null!
DataCacheItem defaultItem = geotopicsCache.GetCacheItem("defaultTTL"); //sliding window shows up
//versioning, optimistic locking
geotopicsCache.Put("defaultTTL", new Geotopic(), defaultItem.Version); //will fail if versions are not equal!

Session state and output caching

Two interesting areas in which Windows Azure caching can be applied are caching the session state of ASP.NET applications and the caching of HTTP responses, for example, complete pages.

In order to use Windows Azure caching (that is, the role-based version), to maintain the session state, you need to add the following code snippet to the web.config file for your web application:

<sessionState mode="Custom" customProvider="AppFabricCacheSessionStoreProvider">
  <providers>
    <add name="AppFabricCacheSessionStoreProvider"
         type="Microsoft.Web.DistributedCache.DistributedCacheSessionStateStoreProvider, Microsoft.Web.DistributedCache"
         cacheName="default"
         useBlobMode="true"
         dataCacheClientName="default" />
  </providers>
</sessionState>

The preceding XML snippet causes your web application to use the default cache that you configured on one of your roles.

To enable output caching, add the following section to your web.config file:

<caching>
  <outputCache defaultProvider="DistributedCache">
    <providers>
      <add name="DistributedCache"
           type="Microsoft.Web.DistributedCache.DistributedCacheOutputCacheProvider, Microsoft.Web.DistributedCache"
           cacheName="default"
           dataCacheClientName="default" />
    </providers>
  </outputCache>
</caching>

This will enable output caching for your web application, using the default cache. Specify a different cache name if you have set up a specific cache for output caching purposes. The pages to be cached determine how long they remain in the cache, and which versions of the page are stored depending on the parameter combinations, by means of the OutputCache directive:

<%@ OutputCache Duration="60" VaryByParam="*" %>
