
The message queue

A message queue, or technically a FIFO (First In, First Out) queue, is a fundamental and well-studied data structure. There are different queue implementations, such as priority queues or double-ended queues, that offer different features, but the general idea is that data is added to the queue and fetched when the caller is ready to process it.

Imagine we are using a basic in-memory queue. In case of an issue, such as a power outage or a hardware failure, the entire queue could be lost, and a program that expects to receive a message would never get it.

However, adopting a message queuing system ensures that messages are delivered to the destination even if something goes wrong. Message queuing enables asynchronous communication between loosely coupled components and also provides solid queuing consistency. If insufficient resources prevent you from immediately processing the data that is sent, you can queue it up on the message queue server, which stores the data until the destination is ready to accept it.

Message queuing plays an important role in large-scale distributed systems and enables asynchronous communication. Let’s take a quick look at the difference between synchronous and asynchronous systems.

In ordinary synchronous systems, tasks are processed one at a time; a new task is not started until the current one is finished. This is the simplest way to get the job done.

Synchronous system

We could also implement this system with threads, in which case the threads process the tasks in parallel.

Threaded synchronous system

In the threading model, threads are managed by the operating system itself on a single processor or multiple processors/cores.

Asynchronous Input/Output (AIO) allows a program to continue its execution while processing input/output requests. AIO is essential in real-time applications. By using AIO, we can map several tasks to a single thread.

Asynchronous system

The traditional way of programming is to start an operation and wait for it to complete. The downside of this approach is that it blocks the execution of the program while the operation is in progress. AIO takes a different approach: work that does not depend on the pending operation can continue running.
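To make the idea concrete, here is a minimal sketch of mapping several tasks onto a single thread using Python's standard asyncio module (not ZeroMQ); the task names and delays are invented for illustration.

    import asyncio

    async def fetch(name, delay):
        # Simulate an I/O-bound task (for example, a network request).
        print(f"{name}: started")
        await asyncio.sleep(delay)   # control returns to the event loop here
        print(f"{name}: finished")

    async def main():
        # All three tasks share one thread; while one waits on I/O,
        # the event loop runs the others.
        await asyncio.gather(fetch("task-1", 1), fetch("task-2", 1), fetch("task-3", 1))

    asyncio.run(main())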

You may wonder why you would use a message queue instead of handling everything with a single-threaded or multi-threaded in-process queue. Let’s consider a scenario where you have a web application similar to Google Images in which users type in some URLs. Once they submit the form, your application fetches all the images from the given URLs. However:

  • If you use a single-threaded queue, your application would not be able to process all the given URLs if there are too many users

  • If you use a multi-threaded queue approach, your application would be vulnerable to a distributed denial-of-service (DDoS) attack

  • You would lose all the given URLs in case of a hardware failure

In this scenario, you know that you need to add the given URLs to a queue and process them, so you would need a message queuing system, as sketched below.
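The following is a minimal sketch of that idea using ZeroMQ's PUSH/PULL sockets from Python (pyzmq). The port number, the example URLs, and the use of a background thread to keep the example self-contained are all assumptions for illustration.

    import threading
    import zmq

    ADDR = "tcp://127.0.0.1:5557"   # assumed free local port

    def worker(expected):
        # Pull URLs one by one and process them (here we just print them).
        receiver = zmq.Context.instance().socket(zmq.PULL)
        receiver.connect(ADDR)
        for _ in range(expected):
            url = receiver.recv_string()
            print("processing", url)

    def producer(urls):
        # Queue the submitted URLs; ZeroMQ delivers them as the worker pulls them.
        sender = zmq.Context.instance().socket(zmq.PUSH)
        sender.bind(ADDR)
        for url in urls:
            sender.send_string(url)

    urls = ["http://example.com/a.jpg", "http://example.com/b.jpg"]
    t = threading.Thread(target=worker, args=(len(urls),))
    t.start()
    producer(urls)
    t.join()

In a real deployment the producer (the web frontend) and the workers would be separate processes, possibly on separate machines, and you could add more workers at any time; PUSH distributes messages among connected PULL sockets in a round-robin fashion.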

Introduction to ZeroMQ

So far we have covered what a message queue is, which brings us to the subject of this article: ZeroMQ.

The community describes ZeroMQ as “sockets on steroids”. More formally, ZeroMQ is a messaging library that helps developers design distributed and concurrent applications.

The first thing we need to know about ZeroMQ is that it is not a traditional message queuing system, such as ActiveMQ, WebSphereMQ, or RabbitMQ. ZeroMQ is different. It gives us the tools to build our own message queuing system. It is a library.

It runs on different architectures from ARM to Itanium, and has support for more than 20 programming languages.

Simplicity

ZeroMQ is simple. We can perform asynchronous I/O operations, and ZeroMQ queues messages in an I/O thread. ZeroMQ’s I/O threads handle network traffic asynchronously, so they do the rest of the job for us. If you have worked with raw sockets before, you know how painful they can be; ZeroMQ makes working with sockets easy.
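As a rough illustration of that point, and assuming pyzmq is installed and local port 5556 is free, the sketch below sends a message before the receiving socket even exists; ZeroMQ's I/O thread queues it and delivers it once the connection is established in the background.

    import zmq

    ctx = zmq.Context()

    # The sender connects and sends immediately; there is no listener yet,
    # so the message is queued by ZeroMQ's I/O thread.
    sender = ctx.socket(zmq.PUSH)
    sender.connect("tcp://127.0.0.1:5556")
    sender.send_string("hello")

    # Only now do we create the receiving end; the queued message arrives
    # once the connection is established behind the scenes.
    receiver = ctx.socket(zmq.PULL)
    receiver.bind("tcp://127.0.0.1:5556")
    print(receiver.recv_string())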

Performance

ZeroMQ is fast. Second Life managed to get 13.4 microsecond end-to-end latencies and up to 4,100,000 messages per second. ZeroMQ can also use a multicast transport protocol, which is an efficient method of transmitting data to multiple destinations.
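Multicast is exposed through ZeroMQ's pgm/epgm transports. The following fragment is only a sketch: it assumes a libzmq build with PGM support, a network interface named eth0, and an arbitrary multicast group, none of which come from the article.

    import zmq

    ctx = zmq.Context()
    pub = ctx.socket(zmq.PUB)
    pub.setsockopt(zmq.RATE, 1000)                # multicast rate limit in kbit/s
    pub.connect("epgm://eth0;239.192.1.1:5555")   # interface;multicast group:port
    pub.send_string("broadcast to all subscribers")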

The brokerless design

Unlike traditional message queuing systems, ZeroMQ is brokerless. In a traditional message queuing system there is a central message server (broker) in the middle of the network; every node connects to this central node and communicates with the other nodes through it, never directly.

In ZeroMQ’s brokerless design, applications communicate with each other directly, without any broker in the middle.
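For example, in the classic request-reply pattern the two applications talk to each other directly; no broker address appears anywhere. This is a minimal sketch assuming pyzmq and a free local port 5558, with the replying peer run in a background thread only to keep the example self-contained.

    import threading
    import zmq

    ADDR = "tcp://127.0.0.1:5558"   # assumed free local port

    def server():
        # The replying peer binds and answers one request; no broker is involved.
        sock = zmq.Context.instance().socket(zmq.REP)
        sock.bind(ADDR)
        request = sock.recv_string()
        sock.send_string("reply to " + request)

    threading.Thread(target=server).start()

    # The requesting peer connects straight to the other application.
    client = zmq.Context.instance().socket(zmq.REQ)
    client.connect(ADDR)
    client.send_string("hello")
    print(client.recv_string())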

ZeroMQ does not store messages on disk; please do not even think about it. However, it is possible to use a local swap file to store messages by setting the zmq.SWAP socket option.
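For completeness, this is roughly how that option is set from pyzmq. Note that zmq.SWAP exists only in ZeroMQ 2.x (it was removed in later versions), so this sketch assumes a 2.x installation; the 25 MB swap size is an arbitrary example.

    import zmq

    ctx = zmq.Context()
    sock = ctx.socket(zmq.PUB)
    # Allow up to ~25 MB of outgoing messages to spill to a local swap file
    # once the in-memory high-water mark is exceeded (ZeroMQ 2.x only).
    sock.setsockopt(zmq.SWAP, 25 * 1024 * 1024)
    sock.bind("tcp://127.0.0.1:5559")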

Summary

This article explained what a message queuing system is, discussed the importance of message queuing, and introduced ZeroMQ to the reader.
