
In this article by Martin Toshev, author of the book Learning RabbitMQ, we will create a simple RabbitMQ cluster with three nodes on the local machine. The steps to follow are:

  • Disable all plug-ins before starting the node instances. This avoids port conflicts with plug-ins such as the management plug-in, which listens on port 15672: if the plug-in is already running as part of another RabbitMQ instance on the same machine, starting a new node will fail because it will try to start the management plug-in on the same port, unless you provide a configuration with a different management port for that node.
  • You don’t have to worry about losing the management interface: the management plug-in is cluster-aware, so it is sufficient to enable it on only one of the instances in the cluster. If you want a failover configuration for the management plug-in, you can enable it on two or more nodes running on different ports.
  • Start three independent RabbitMQ node instances on the current machine.
  • Add the nodes to the cluster by specifying at least one active cluster node when joining. You can specify more than one, but a single reachable node is enough for the joining node to discover all the other nodes currently in the cluster.


The first step can be accomplished by executing the following command:

rabbitmq-plugins.bat disable rabbitmq_management

The second step is also straightforward. The root node in the cluster is already present – that is the instance of RabbitMQ that we have been running so far. You just need to execute the following commands to start two more independent nodes (named instance1 and instance2 and running on different ports):

set RABBITMQ_NODENAME=instance1
set RABBITMQ_NODE_PORT=5701
rabbitmq-server.bat -detached

set RABBITMQ_NODENAME=instance2
set RABBITMQ_NODE_PORT=5702
rabbitmq-server.bat -detached

If you are using a Unix-based operating system, the preceding commands will look like the following:

RABBITMQ_NODENAME=instance1 RABBITMQ_NODE_PORT=5701 ./rabbitmq-server.sh -detached

RABBITMQ_NODENAME=instance2 RABBITMQ_NODE_PORT=5702 ./rabbitmq-server.sh -detached

If you are using the default installation on Windows, then a standalone instance will already be running under the name specified upon installation of the broker, using the default node port of 5672 and distribution port of 25672 (the node port + 20000). That is why we need to specify different names and node ports when starting the additional instances.
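The distribution port is derived from the node port by a fixed offset, so you can compute it with simple shell arithmetic (this sketch assumes the default offset of 20000 described above):

```shell
# The distribution port defaults to the node port plus 20000
NODE_PORT=5701
DIST_PORT=$((NODE_PORT + 20000))
echo "instance1 distribution port: $DIST_PORT"
```

For instance1 (node port 5701) this yields 25701, and for instance2 (node port 5702) it yields 25702.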

Adding nodes to the cluster

Now let’s add the two nodes we created and started to the cluster – currently consisting only of a single node. To verify this, you can run the following command:

rabbitmqctl.bat cluster_status

You will see something like this in the output:

[
 {nodes,
    [{disc,[rabbit@DOMAIN]}]},
 {running_nodes,[rabbit@DOMAIN]},
 {cluster_name,<<"rabbit@DOMAIN">>},
 {partitions,[]}
]

The cluster configuration lists the current nodes in the cluster – these can be either disc or RAM nodes. By default, nodes are created as disc nodes, meaning that they persist cluster metadata on disk. RAM nodes keep that metadata only in memory, which speeds up cluster operations at the cost of durability of the metadata. Whether this trade-off between loss of data and performance is acceptable depends on the messaging requirements of the particular application. In the preceding example, we can see that there is only one disc node currently running and that the name of the cluster is inherited from the name of the root node.

Let’s add the instance1 node to the cluster:

rabbitmqctl.bat -n instance1 stop_app
rabbitmqctl.bat -n instance1 join_cluster rabbit@DOMAIN
rabbitmqctl.bat -n instance1 start_app

In case instance1 is not a new instance and already has some metadata such as queues, exchanges, or vhosts, then after the stop_app step you have to clear the state of the node before joining it to the cluster:

rabbitmqctl.bat -n instance1 reset

If the preceding commands succeed, you should get the following sequence of messages:

Stopping node instance1@DOMAIN ...
Clustering node instance1@DOMAIN with rabbit@DOMAIN ...
Starting node instance1@DOMAIN ...

Now let’s also add the second node to the cluster:

rabbitmqctl.bat -n instance2 stop_app
rabbitmqctl.bat -n instance2 join_cluster rabbit@DOMAIN
rabbitmqctl.bat -n instance2 start_app

Note that you have to provide only a single existing node when joining the cluster rather than a list of all nodes – RabbitMQ automatically clusters the joining node with all the other nodes currently in the cluster. The only condition is that the specified node must be up and running.

If we check the cluster configuration again:

rabbitmqctl.bat cluster_status

We will see something like this:

[
{nodes,
   [{disc,[instance1@DOMAIN, instance2@DOMAIN, rabbit@DOMAIN]}]},
{running_nodes,[instance1@DOMAIN, instance2@DOMAIN, rabbit@DOMAIN]},
{cluster_name,<<"rabbit@DOMAIN">>},
{partitions,[]}
]

Since the management console is already enabled for the root node in the cluster (rabbit@DOMAIN), if we go to the Overview tab we will see the three nodes under the Nodes section:

At that point we have a fully functional RabbitMQ cluster with three disc nodes.

You may notice that some statistics are displayed for the root node, such as used/available Erlang processes, used/available memory, and a few others, while for the other two nodes we added to the cluster a "Node statistics not available" message is displayed instead. This is because we disabled all plug-ins on the two nodes before starting them, including the rabbitmq_management_agent plug-in, which the management plug-in running on a cluster node requires in order to collect and display statistics for the other instances. The following commands enable the management agent plug-in on the instances:

rabbitmq-plugins.bat -n instance1 enable rabbitmq_management_agent
rabbitmq-plugins.bat -n instance2 enable rabbitmq_management_agent

If we now go to the Overview tab, we will see that statistics are displayed for all three nodes:

We can also configure the cluster nodes directly in the RabbitMQ configuration file by specifying a list of running RabbitMQ instances, identified by their node names; when a node starts up, it will try to cluster with the listed nodes. There are some prerequisites when RabbitMQ creates the cluster from the configuration: the nodes must be in a clean state, they must run the same version of RabbitMQ, and they must share the same Erlang cookie. To make sure that a node is in a clean state (if it is not newly created), reset its state with the rabbitmqctl utility:

rabbitmqctl.bat -n instance1 reset
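For illustration, a cluster list in the classic Erlang-term rabbitmq.config format might look like the following sketch (the node names follow the examples in this article; the second element of the tuple sets the type the starting node should assume):

```erlang
[
  {rabbit,
    [{cluster_nodes, {['rabbit@DOMAIN', 'instance1@DOMAIN', 'instance2@DOMAIN'], disc}}]}
].
```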

To make sure they are running the same version of RabbitMQ you can use the rabbitmqctl utility again:

rabbitmqctl.bat -n instance1 status

Note that in our case the preceding check is not strictly necessary, since we are running the instances from the same installation of RabbitMQ. If the instances were running different versions of the broker (on the same or on different machines), then we would have to upgrade all of the nodes to the same version of RabbitMQ. In order to perform the upgrade, however, we must designate one of the disc nodes as the upgrader node that will synchronize the cluster nodes once the upgrade is done – that node should be stopped last and started first when the entire cluster is brought down for the upgrade. To make sure the nodes have the same cookie, simply copy it over to all the nodes from the root node in the cluster.
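Since our three instances run from the same installation on one machine, they already share a cookie; for nodes on separate machines, the copy step can be sketched as follows. The directories here are illustrative stand-ins for the node home directories (on a real setup you would copy the file between hosts, for example with scp, and on Windows the cookie typically lives as .erlang.cookie in the user's home directory):

```shell
# Illustrative sketch: give a second node the root node's Erlang cookie.
# Two temporary directories simulate the home directories of two nodes.
mkdir -p /tmp/rootnode /tmp/othernode
echo "SECRETCOOKIEVALUE" > /tmp/rootnode/.erlang.cookie
cp /tmp/rootnode/.erlang.cookie /tmp/othernode/.erlang.cookie
# Verify both nodes now hold an identical cookie
cmp -s /tmp/rootnode/.erlang.cookie /tmp/othernode/.erlang.cookie && echo "cookies match"
```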

Another consideration is that the nodes might be running behind firewalls, in which case you have to make sure that the ports used by RabbitMQ are open. One is 4369 (unless changed), used by the epmd port mapper daemon, which resolves node names to distribution ports within the cluster. The others are the node ports we assigned when starting the instances (5701 for instance1 and 5702 for instance2), along with the corresponding distribution ports used for inter-node communication (node port + 20000, so 25701 and 25702).

Adding RAM-only nodes to the cluster

Adding a RAM-only node to our cluster is similar to adding a disc node, with one additional parameter. The following example adds the instance3 RAM node to the cluster:

set RABBITMQ_NODENAME=instance3
set RABBITMQ_NODE_PORT=5703
rabbitmq-server.bat -detached

rabbitmqctl.bat -n instance3 stop_app
rabbitmqctl.bat -n instance3 join_cluster --ram rabbit@DOMAIN
rabbitmqctl.bat -n instance3 start_app

If we now check the cluster status:

rabbitmqctl.bat cluster_status

We will see that the instance3 node is registered as a RAM node to the cluster:

[
 {nodes,
    [{disc,[instance1@DOMAIN, instance2@DOMAIN, rabbit@DOMAIN]},
     {ram,[instance3@DOMAIN]}]},
 {running_nodes,[instance3@DOMAIN, instance1@DOMAIN, instance2@DOMAIN, rabbit@DOMAIN]},
 {cluster_name,<<"rabbit@DOMAIN">>},
 {partitions,[]}
]

You can also switch the node back to a disc node using the rabbitmqctl utility – you must first stop the running RabbitMQ application on the node, change the node type, and then start the application again:

rabbitmqctl.bat -n instance3 stop_app
rabbitmqctl.bat -n instance3 change_cluster_node_type disc
rabbitmqctl.bat -n instance3 start_app

Summary

In this article we have learnt how to create a RabbitMQ cluster with three nodes on the local machine, using both disc and RAM nodes.
