
Service Bus

The Windows Azure Service Bus provides a hosted, secure, and widely available infrastructure for widespread communication, large-scale event distribution, naming, and service publishing. Service Bus provides connectivity options for Windows Communication Foundation (WCF) and other service endpoints, including REST endpoints, that would otherwise be difficult or impossible to reach. Endpoints can be located behind Network Address Translation (NAT) boundaries, or bound to frequently changing, dynamically assigned IP addresses, or both.

Getting started

To get started and use the features of Service Bus, you need to make sure you have the Windows Azure SDK installed.

Queues

Queues in Service Bus (not to be confused with Windows Azure Storage queues) offer a first-in, first-out (FIFO) message delivery capability. This matters for applications that expect messages in a certain order. Just like ordinary Windows Azure Storage queues, Service Bus queues enable the decoupling of your application components, which can keep functioning even if some parts of the application are offline. Among the differences between the two types of queues are that Service Bus queues can hold larger messages and can be used in conjunction with the Access Control Service.

Working with queues

To create a queue, go to the Windows Azure portal and select the Service Bus, Access Control & Caching tab. Next, select Service Bus, select your namespace, and click on New Queue; the following screen will appear. If you did not set up a namespace earlier, you need to create one before you can create a queue:

There are some properties that can be configured during the setup process of a queue. Obviously, the name uniquely identifies the queue within the namespace. Default Message Time To Live sets the TTL applied to messages that do not specify one themselves; it can also be set in code and is a TimeSpan value.

Duplicate Detection History Time Window specifies how long the (unique) message IDs of received messages are retained to check for duplicate messages. This property is ignored if the Requires Duplicate Detection option is not set.

Keep in mind that a long detection history means message IDs are persisted for that entire period. If you process many messages, the queue size grows, and so does your bill.

When a message expires, or when the queue size limit is reached, messages are dead-lettered: they end up in a separate queue named $DeadLetterQueue. Imagine a scenario where heavy traffic in your queue results in messages landing in the dead letter queue. Your application should be robust and process these messages as well.
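Draining the dead letter queue works like reading any other queue; its path is derived from the parent queue. A minimal sketch, assuming a queue named geotopiaqueue, a valid Service Bus connection string, and a hypothetical HandleDeadLetter helper:

```csharp
// The dead letter queue lives at "geotopiaqueue/$DeadLetterQueue"
string deadLetterPath = QueueClient.FormatDeadLetterPath("geotopiaqueue");

MessagingFactory factory = MessagingFactory.CreateFromConnectionString(connectionString);
MessageReceiver dlqReceiver = factory.CreateMessageReceiver(deadLetterPath);

BrokeredMessage deadLetter;
while ((deadLetter = dlqReceiver.Receive(TimeSpan.FromSeconds(5))) != null)
{
    // Standard properties tell you why the message was dead-lettered
    object reason;
    deadLetter.Properties.TryGetValue("DeadLetterReason", out reason);
    HandleDeadLetter(deadLetter, reason as string); // hypothetical repair/audit helper
    deadLetter.Complete();
}
```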

The Lock Duration property defines how long a message is locked when the PeekLock() method is called. PeekLock() hides a specific message from other consumers/processors until the lock expires. Typically, this value needs to be long enough to process and delete the message.

A sample scenario

Remember the differences between the two queue types that Windows Azure offers: Service Bus queues are able to guarantee first-in, first-out ordering and to support transactions. In this scenario, a user posts a geotopic on the canvas containing text and also uploads a video by using the parallel upload functionality. The WCF service CreateGeotopic() then posts a message to the queue to enter the geotopic, and when the file finishes uploading, a second message is sent to the queue. These two messages should be part of a single transaction. Geotopia.Processor processes the first message, but only if the media file has finished uploading. This example shows how a transaction is handled and how a message can be abandoned and made available on the queue again. If the geotopic is validated as a whole (the file is uploaded properly), the worker role reroutes the message to a designated audit trail queue, to keep track of actions taken by the system, and also sends it to a topic (see the next section) that holds messages to be pushed to mobile devices. The messages in this topic are again processed by a worker role. The reason for choosing a separate worker role is that it yields a loosely coupled solution that can be scaled in a fine-grained way, by scaling only the back-end worker role.

See the following diagram for an overview of this scenario:

In the previous section, we already created a queue named geotopiaqueue. In order to work with queues, you need a service identity for the service namespace; in this case, we use a service identity with symmetric issuer and key credentials.

Preparing the project

In order to make use of the Service Bus capabilities, you need to add a reference to Microsoft.ServiceBus.dll, located in <drive>:\Program Files\Microsoft SDKs\Windows Azure\.NET SDK\2012-06\ref. Next, add the following using statements to your file:

using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

Your project is now ready to make use of Service Bus queues.

In the configuration settings of the web role project hosting the WCF services, add a new configuration setting named ServiceBusQueue with the following value:

"Endpoint=sb://<servicenamespace>.servicebus.windows.net/;SharedSecretIssuer=<issuerName>;SharedSecretValue=<yoursecret>"

The properties of the queue you configured in the Windows Azure portal can also be set programmatically.
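For example, the same properties discussed earlier (default TTL, duplicate detection, lock duration) can be set through NamespaceManager when the queue is created. A sketch, reusing the placeholders from the connection string above:

```csharp
TokenProvider tokenProvider =
    TokenProvider.CreateSharedSecretTokenProvider("<issuerName>", "<yoursecret>");
Uri uri = ServiceBusEnvironment.CreateServiceUri("sb", "<servicenamespace>", string.Empty);
NamespaceManager namespaceManager = new NamespaceManager(uri, tokenProvider);

QueueDescription description = new QueueDescription("geotopiaqueue")
{
    DefaultMessageTimeToLive = TimeSpan.FromDays(7),          // default TTL
    RequiresDuplicateDetection = true,
    DuplicateDetectionHistoryTimeWindow = TimeSpan.FromMinutes(10),
    LockDuration = TimeSpan.FromSeconds(60)                   // PeekLock() lock duration
};

if (!namespaceManager.QueueExists("geotopiaqueue"))
{
    namespaceManager.CreateQueue(description);
}
```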

Sending messages

Messages that are sent to a Service Bus queue are instances of BrokeredMessage. This class contains standard properties such as TimeToLive and MessageId. An important property is Properties, of type IDictionary<string, object>, where you can add additional data. The body of the message can be set through the constructor of BrokeredMessage, where the parameter must be of a type decorated with the [Serializable] attribute.

The following code snippet shows how to send a BrokeredMessage:

MessagingFactory factory = MessagingFactory.CreateFromConnectionString(connectionString);
MessageSender sender = factory.CreateMessageSender("geotopiaqueue");
sender.Send(new BrokeredMessage(
    new Geotopic
    {
        id = id,
        subject = subject,
        text = text,
        PostToFacebook = PostToFacebook,
        accessToken = accessToken,
        MediaFile = MediaFile // Uri of uploaded media file
    }));

As the scenario depicts a situation where two messages are expected to be sent in a certain order and to be treated as a single transaction, we need to add some more logic to the code snippet.
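One way to get this all-or-nothing behavior is to wrap both Send() calls in a TransactionScope (this requires a reference to System.Transactions; note that a Service Bus transaction cannot span multiple entities, so both messages must target the same queue). A sketch, where the geotopic and mediaFileUri variables are assumed to be in scope:

```csharp
using (TransactionScope scope = new TransactionScope())
{
    sender.Send(new BrokeredMessage(geotopic));                 // the geotopic itself
    sender.Send(new BrokeredMessage(mediaFileUri.ToString()));  // "upload finished" signal
    scope.Complete(); // neither message becomes visible unless both sends succeed
}
```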

Right before this message is sent, the media file is uploaded by using the BlobUtil class. Consider sending the media file together with the BrokeredMessage if it is small enough. The upload might be a long-running operation, depending on the size of the file. The asynchronous upload process returns a Uri, which is passed to the BrokeredMessage.

The situation is:

  • A multimedia file is uploaded from the client to Windows Azure Blob storage by using a parallel upload (or passed along in the message itself). A parallel upload breaks the media file up into several chunks and uploads them separately by using multithreading.
  • A message is sent to geotopiaqueue, and Geotopia.Processor processes the messages in the queues in a single transaction.
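The parallel upload mentioned in the first bullet can be sketched with the storage client's block blob API. This is an illustration of the idea, not the actual BlobUtil implementation; blob (a CloudBlockBlob) and path are assumed to exist:

```csharp
byte[] data = File.ReadAllBytes(path);
int chunkSize = 4 * 1024 * 1024; // 4 MB blocks
var blockIds = new List<string>();
for (int i = 0; i * chunkSize < data.Length; i++)
{
    blockIds.Add(Convert.ToBase64String(BitConverter.GetBytes(i))); // block IDs must be Base64
}

Parallel.For(0, blockIds.Count, i =>
{
    int offset = i * chunkSize;
    int size = Math.Min(chunkSize, data.Length - offset);
    using (var chunk = new MemoryStream(data, offset, size))
    {
        blob.PutBlock(blockIds[i], chunk, null); // upload one chunk
    }
});

blob.PutBlockList(blockIds); // commit the blocks in order
Uri mediaFileUri = blob.Uri; // the Uri that is passed to the BrokeredMessage
```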

Receiving messages

On the other side of the Service Bus queue resides our worker role, Geotopia.Processor, which performs the following tasks:

  • It grabs messages from the queue
  • It sends each message straight to a table in Windows Azure Storage for auditing purposes
  • It creates a geotopic that can be subscribed to

The following code snippet shows how to perform these three tasks:

MessagingFactory factory = MessagingFactory.CreateFromConnectionString(connectionString);
MessageReceiver receiver = factory.CreateMessageReceiver("geotopiaqueue");
BrokeredMessage receivedMessage = receiver.Receive();
try
{
    ProcessMessage(receivedMessage);
    receivedMessage.Complete();
}
catch (Exception e)
{
    receivedMessage.Abandon();
}

Cross-domain communication

We created a new web role in our Geotopia solution, hosting the WCF services we want to expose. As the client is a Silverlight application (running in the browser), we face cross-domain communication. To protect against security vulnerabilities and to prevent cross-site requests from a Silverlight client to services (without the user noticing), Silverlight by default allows only site-of-origin communication. Cross-site request forgery is an exploit that can occur when cross-domain communication is allowed; for example, a Silverlight application sending commands to some service running somewhere on the Internet.

As we want the Geotopia Silverlight client to access the WCF service running in another domain, we need to explicitly allow cross-domain operations. This can be achieved by adding a file named clientaccesspolicy.xml at the root of the domain where the WCF service is hosted and allowing this cross-domain access. Another option is to add a crossdomain.xml file at the root where the service is hosted.
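For illustration, a permissive clientaccesspolicy.xml looks like the following (in production, narrow the domain uri down to the domain hosting the Silverlight client):

```xml
<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from http-request-headers="SOAPAction">
        <domain uri="*"/>
      </allow-from>
      <grant-to>
        <resource path="/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>
```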

Please go to http://msdn.microsoft.com/en-us/library/cc197955(v=vs.95).aspx to find more details on the cross-domain communication issues.

Comparison

The following table shows the similarities and differences between Windows Azure and Service Bus queues:

| Criteria | Windows Azure queue | Service Bus queue |
| --- | --- | --- |
| Ordering guarantee | No, but best-effort first-in, first-out | First-in, first-out |
| Delivery guarantee | At least once | At most once; use the PeekLock() method to ensure that no messages are missed. PeekLock() together with the Complete() method enables a two-stage receive operation |
| Transaction support | No | Yes, by using TransactionScope |
| Receive mode | Peek & Lease | Peek & Lock; Receive & Delete |
| Lease/Lock duration | Between 30 seconds and 7 days | Between 60 seconds and 5 minutes |
| Lease/Lock granularity | Message level | Queue level |
| Batched receive | Yes, by using GetMessages(count) | Yes, by using the prefetch property or transactions |
| Scheduled delivery | Yes | Yes |
| Automatic dead lettering | No | Yes |
| In-place update | Yes | No |
| Duplicate detection | No | Yes |
| WCF integration | No | Yes, through WCF bindings |
| WF integration | Not standard; needs a custom activity | Yes, out-of-the-box activities |
| Message size | Maximum 64 KB | Maximum 256 KB |
| Maximum queue size | 100 TB, the limit of a storage account | 1, 2, 3, 4, or 5 GB; configurable |
| Message TTL | Maximum 7 days | Unlimited |
| Number of queues | Unlimited | 10,000 per service namespace |
| Management protocol | REST over HTTP(S) | REST over HTTP(S) |
| Runtime protocol | REST over HTTP(S) | REST over HTTP(S) |
| Queue naming rules | Maximum of 63 characters | Maximum of 260 characters |
| Queue length function | Yes, approximate value | Yes, exact value |
| Throughput | Maximum of 2,000 messages/second | Maximum of 2,000 messages/second |
| Authentication | Symmetric key | ACS claims |
| Role-based access control | No | Yes, through ACS roles |
| Identity provider federation | No | Yes |
| Costs | $0.01 per 10,000 transactions | $0.01 per 10,000 transactions |
| Billable operations | Every call that touches "storage" | Only Send and Receive operations |
| Storage costs | $0.14 per GB per month | None |
| ACS transaction costs | None, since ACS is not supported | $1.99 per 100,000 token requests |

Background information

There are some additional characteristics of Service Bus queues that need your attention:

  • In order to guarantee the FIFO mechanism, you need to use messaging sessions.
  • Using Receive & Delete on Service Bus queues reduces transaction costs, since it counts as a single operation.
  • The maximum size of a Base64-encoded message on the Windows Azure queue is 48 KB; for standard encoding it is 64 KB.
  • Sending messages to a Service Bus queue that has reached its size limit throws an exception that needs to be caught.
  • When throughput reaches its limit, the Windows Azure queue service returns an HTTP 503 error response. Implement retry logic to tackle this issue.
  • Throttled (and thus rejected) requests are not billable.
  • ACS transactions are based on instances of the messaging factory class. A received token expires after 20 minutes, meaning that you only need three tokens per hour of execution.
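To illustrate the first bullet: a messaging session groups related messages under one SessionId, and a single receiver drains them in order. A sketch, assuming the queue was created with RequiresSession set to true and sender is a MessageSender for it:

```csharp
// Sender: stamp related messages with the same SessionId
sender.Send(new BrokeredMessage("part 1") { SessionId = "user-42" });
sender.Send(new BrokeredMessage("part 2") { SessionId = "user-42" });

// Receiver: accept that session and read its messages in FIFO order
QueueClient queueClient =
    QueueClient.CreateFromConnectionString(connectionString, "geotopiaqueue");
MessageSession session = queueClient.AcceptMessageSession("user-42");
BrokeredMessage message;
while ((message = session.Receive(TimeSpan.FromSeconds(5))) != null)
{
    ProcessMessage(message); // hypothetical handler
    message.Complete();
}
```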

Topics and subscriptions

Topics and subscriptions are useful in scenarios where, instead of a single consumer (as with queues), multiple consumers take part in the pattern. Imagine, in our scenario, users who want to subscribe to geotopics posted by friends. In such a scenario, a subscription is created on a topic and the worker role processes it; for example, mobile clients can receive push notifications from the worker role.

Sending messages to a topic works in a similar way as sending messages to a Service Bus queue.
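A sketch of the sending side, assuming a topic named geotopiatopic and a valid connection string:

```csharp
MessagingFactory factory = MessagingFactory.CreateFromConnectionString(connectionString);
TopicClient topicClient = factory.CreateTopicClient("geotopiatopic");
topicClient.Send(new BrokeredMessage("a geotopic posted by a friend"));
```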

Preparing the project

In the Windows Azure portal, go to the Service Bus, Access Control & Caching tab. Select Topics and create a new topic, as shown in the following screenshot:

Next, click on OK and a new topic is created for you. The next thing you need to do is to create a subscription on this topic. To do this, select New Subscription and create a new subscription, as shown in the following screenshot:

Using filters

By default, topics and subscriptions form a publish/subscribe mechanism in which messages are made available to every registered subscription. To actively influence a subscription (and receive only the messages that are of interest to you), you can create subscription filters. A SqlFilter can be passed as a parameter to the CreateSubscription method of the NamespaceManager class. SqlFilter operates on the properties of a message, so we need to extend the message with such a property.

In our scenario, we are only interested in messages that are concerning a certain subject. The way to achieve this is shown in the following code snippet:

BrokeredMessage message = new BrokeredMessage(new Geotopic
{
    id = id,
    subject = subject,
    text = text,
    PostToFacebook = PostToFacebook,
    accessToken = accessToken,
    mediaFile = fileContent
});
// used for topics & subscriptions
message.Properties["subject"] = subject;

The preceding piece of code extends BrokeredMessage with a subject property that can be used in SqlFilter. A filter can only be applied in code on the subscription and not in the Windows Azure portal. This is fine, because in Geotopia, users must be able to subscribe to interesting topics, and for every topic that does not exist yet, a new subscription is made and processed by the worker role, the processor. The worker role contains the following code snippet in one of its threads:

Uri uri = ServiceBusEnvironment.CreateServiceUri("sb", "<yournamespace>", string.Empty);
string name = "owner";
string key = "<yourkey>";
// get some credentials
TokenProvider tokenProvider = TokenProvider.CreateSharedSecretTokenProvider(name, key);
// create a namespace client
NamespaceManager namespaceClient = new NamespaceManager(
    ServiceBusEnvironment.CreateServiceUri("sb", "geotopiaservicebus", string.Empty),
    tokenProvider);
MessagingFactory factory = MessagingFactory.Create(uri, tokenProvider);

BrokeredMessage message = new BrokeredMessage();
message.Properties["subject"] = "interestingsubject";
MessageSender sender = factory.CreateMessageSender("geotopiatopic");
sender.Send(message); // message is sent to the topic

SubscriptionDescription subDesc = namespaceClient.CreateSubscription(
    "geotopiatopic",
    "SubscriptionOnMe",
    new SqlFilter("subject='interestingsubject'"));

// the processing loop
while (true)
{
    MessageReceiver receiver =
        factory.CreateMessageReceiver("geotopiatopic/subscriptions/SubscriptionOnMe");
    // it now only gets messages containing the property 'subject'
    // with the value 'interestingsubject'
    BrokeredMessage receivedMessage = receiver.Receive();
    try
    {
        ProcessMessage(receivedMessage);
        receivedMessage.Complete();
    }
    catch (Exception e)
    {
        receivedMessage.Abandon();
    }
}

Windows Azure Caching

Windows Azure offers caching capabilities out of the box. Caching is fast because it is built as an in-memory, distributed technology, running on different servers.

Windows Azure Caching offers two types of cache:

  • Caching deployed on a role
  • Shared caching

When you decide to host caching on your Windows Azure roles, you need to pick one of two deployment alternatives. The first is dedicated caching, where a worker role is fully dedicated to running as a caching store and its memory is used for caching. The second option is a co-located topology, meaning that a certain percentage of the available memory in your roles is assigned and reserved for in-memory caching purposes. Keep in mind that the co-located option is the most cost-effective one, as you don't have a role running just for its memory.
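Once a role hosts caching (dedicated or co-located), using it from code goes through the caching client API (add a using for Microsoft.ApplicationServer.Caching). A sketch, assuming the cache client is configured in the role's configuration and Geotopic is serializable:

```csharp
DataCacheFactory cacheFactory = new DataCacheFactory(); // reads the dataCacheClient config
DataCache cache = cacheFactory.GetDefaultCache();

cache.Put("geotopic:42", geotopic, TimeSpan.FromMinutes(10)); // expires after 10 minutes
Geotopic cached = (Geotopic)cache.Get("geotopic:42");         // null when absent or expired
```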

Shared caching is a central caching repository, managed by the platform, that is accessible to your hosted services. You need to register the shared caching mechanism in the Service Bus, Access Control & Caching section of the portal, and configure a namespace and the size of the cache (remember, there is money involved). This caching facility is shared and runs inside a multitenant environment.
