In this article by Randall Smith, the author of the book Docker Orchestration, we will see how to use Docker Swarm to orchestrate a Docker cluster.

Docker Swarm is the native orchestration tool for Docker. It was rolled into the core Docker suite with version 1.12. At its simplest, using Docker Swarm is just like using Docker: all of the tools that have been covered still work. Swarm adds a few features that make deploying and updating services much easier.

Setting up a Swarm

The first step in running a swarm is to have a number of hosts ready with Docker installed. It does not matter whether you use the install script from get.docker.com or Docker Machine. You also need to be sure that a few ports are open between the servers, as given here:

  • 2377 tcp: cluster management
  • 7946 tcp and udp: node communication
  • 4789 udp: overlay network traffic

Take a moment to assign a static IP address to each host that will be a swarm manager. Every manager must have a static IP address so that workers know how to connect to it. Worker addresses can be dynamic, but manager addresses must be static.

As you plan your swarm, take a moment to decide how many managers you are going to need. The minimum number to run and still maintain fault tolerance is three. Larger clusters may need five or seven; very rarely will you need more than that. In any case, the number of managers should be odd. Docker Swarm can maintain a quorum as long as more than half of the managers (50% + 1) are running, so having two or four managers provides no more fault tolerance than having one or three.

If possible, you should spread your managers out so that a single failure will not take down your swarm. For example, make sure that they are not all running on the same VM host or connected to the same switch. The whole point of having multiple managers is to keep the swarm running in the event of a manager failure. Do not undermine your efforts by allowing a single point of failure somewhere else to take down your swarm.

Initializing a Swarm

If you are working from your desktop, it may be better to use Docker Machine to connect to the host as this will set up the necessary TLS keys. Run the docker swarm init command to initialize the swarm. The --advertise-addr flag is optional. By default, Docker will guess an address on the host, which may not be the one you want. To be sure, set it to the static IP address you assigned to the manager host:

$ docker swarm init --advertise-addr 172.31.26.152
Swarm initialized: current node (6d6e6kxlyjuo9vb9w1uug95zh) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-0061dt2qjsdr4gabxryrksqs0b8fnhwg6bjhs8cxzen7tmarbi-89mmok3pf6dsa5n33fb60tx0m \
    172.31.26.152:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Included in the output of the init command is the command to run on each worker host. It is also possible to run it from your desktop if you have set your environment to use the worker host. You can save the command somewhere, but you can always get it again by running docker swarm join-token worker on one of the managers.

As part of the join process, Docker Swarm will create TLS keys, which will be used to encrypt communication between the managers and the hosts. It will not, however, encrypt network traffic between containers. The certificates are rotated every three months by default. The rotation period can be changed by running docker swarm update --cert-expiry duration. The duration is specified as a number of hours and minutes. For example, setting the duration to 1000h will tell Docker Swarm to rotate the certificate every 1,000 hours.
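For example, the following command, run from a manager, would set the rotation period to 1,000 hours; this is a quick sketch, so check docker swarm update --help on your version for the exact duration format it accepts:

$ docker swarm update --cert-expiry 1000h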

Managing a Swarm

One of the easiest pieces to overlook when it comes to orchestrating Docker is how to manage the cluster itself. In this section, you will see how to manage your swarm including adding and removing nodes, changing their availability, and backup and recovery.

Adding a node

New worker nodes can be added at any time. Install Docker, then run docker swarm join-token worker to get the docker swarm join command that joins the host to the swarm. Once added, the worker will be available to run tasks. Note that the join command, including the token, is the same for every worker, which makes it easy to add to a host configuration script so that new hosts join the swarm on boot.
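For example, running the following on a manager prints the join command for workers. The token below is a placeholder; yours will be specific to your swarm:

$ docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join --token <worker-token> 172.31.26.152:2377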

You can get a list of all of the nodes in the swarm by running docker node ls:

$ docker node ls

ID                           HOSTNAME STATUS AVAILABILITY MANAGER STATUS

1i3wtacdjz5p509bu3l3qfbei   worker1   Ready   Active       Reachable

3f2t4wwwthgcahs9089b8przv   worker2   Ready   Active       Reachable

6d6e6kxlyjuo9vb9w1uug95zh * manager   Ready   Active       Leader

The list will not only show the machines but also whether they are active in the swarm and whether they are managers. The node marked as Leader is the current leader of the swarm; it coordinates the other managers.

Promoting and demoting nodes

In Docker Swarm, just like in real life, managers are also workers. This means that managers can run tasks. It also means that workers can be promoted to become managers. This can be useful to quickly replace a failed manager or to seamlessly increase the number of managers. The following command promotes the node named worker1 to be a manager:

$ docker node promote worker1                                                          

Node worker1 promoted to a manager in the swarm.

Managers can also be demoted to become plain workers. This should be done before a manager node is going to be decommissioned. The following command demotes worker1 back to a plain worker:

$ docker node demote worker1                                                          

Manager worker1 demoted in the swarm.

Whatever your reasons for promoting or demoting nodes, make sure that when you are done, there is an odd number of managers.

Changing node availability

Docker Swarm nodes have a concept of availability. The availability of a node determines whether or not tasks can be scheduled on that node. Use the docker node update --availability <state> <node-id> command to set the availability state. There are three availability states that can be set: pause, drain, and active.

Pausing a node

Setting a node’s availability to pause will prevent the scheduler from assigning new tasks to the node. Existing tasks will continue to run. This can be useful for troubleshooting load issues on a node or for preventing new tasks from being assigned to an already overloaded node. The following command pauses worker2:

$ docker node update --availability pause worker2

worker2

$ docker node ls

ID                           HOSTNAME STATUS AVAILABILITY MANAGER STATUS

1i3wtacdjz5p509bu3l3qfbei   worker1   Ready   Active       Reachable

3f2t4wwwthgcahs9089b8przv   worker2   Ready   Pause         Reachable

6d6e6kxlyjuo9vb9w1uug95zh * manager   Ready   Active       Leader

Do not rely on using pause to deal with overload issues. It is better to place reasonable resource limits on your services and let the scheduler figure things out for you. You can use pause to help determine what the resource limits should be. For example, you can start a task on a node, then pause the node to prevent new tasks from running while you monitor resource usage.
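As a sketch of the limits approach, the flags below set CPU and memory limits and reservations when a service is created; the service name and the values are only examples and will need tuning for your workload:

$ docker service create --name web \
    --limit-cpu 0.5 --limit-memory 256M \
    --reserve-cpu 0.25 --reserve-memory 128M \
    nginx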

Draining a node

Like pause, setting a node’s availability to drain will stop the scheduler from assigning new tasks to the node. In addition, drain will stop any running tasks and reschedule them to run elsewhere in the swarm. The drain mode has two common purposes.

First, it is useful for preparing a node for an upgrade. Containers will be stopped and rescheduled in an orderly fashion. Updates can then be applied and the node rebooted, if necessary, without further disruption. The node can be set to active again once the updates are complete.
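A typical maintenance pass might look something like the following sketch. The drain and activate commands run from a manager; the package commands run on the node itself and assume a Debian or Ubuntu host:

$ docker node update --availability drain worker1
# on worker1 itself:
$ sudo apt-get update && sudo apt-get upgrade -y
$ sudo reboot
# back on a manager, once worker1 has rebooted:
$ docker node update --availability active worker1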

Remember that, when draining a node, running containers have to be stopped and restarted elsewhere. This can cause disruption if your applications are not built to handle failure. The great thing about containers is that they start quickly, but some services, such as MySQL, take a few seconds to initialize.

The second use of drain is to prevent services from being scheduled on manager nodes. Manager processing is very reliant on messages being passed in a timely manner. An unconstrained task running on a manager node can starve the manager of resources, effectively causing a denial of service for the node and problems for your cluster. It is not uncommon to leave manager nodes in a drain state permanently. The following command will drain the node named manager:

$ docker node update --availability drain manager

manager

$ docker node ls

ID                           HOSTNAME STATUS AVAILABILITY MANAGER STATUS

1i3wtacdjz5p509bu3l3qfbei   worker1   Ready   Active       Reachable

3f2t4wwwthgcahs9089b8przv   worker2   Ready   Active       Reachable

6d6e6kxlyjuo9vb9w1uug95zh * manager   Ready   Drain         Leader

Activating a node

When a node is ready to accept tasks again, set the state to active. Do not be concerned if the node does not immediately fill up with containers. Tasks are only assigned when a new scheduling event happens, such as starting a service. The following command will reactivate the worker2 node:

$ docker node update --availability active worker2

worker2

$ docker node ls

ID                           HOSTNAME STATUS AVAILABILITY MANAGER STATUS

1i3wtacdjz5p509bu3l3qfbei   worker1   Ready   Active       Reachable

3f2t4wwwthgcahs9089b8przv   worker2   Ready   Active       Reachable

6d6e6kxlyjuo9vb9w1uug95zh * manager   Ready   Active       Leader

Removing nodes

Nodes may need to be removed for a variety of reasons including upgrades, failures, or simply eliminating capacity that is no longer needed. For example, it may be easier to upgrade nodes by building new ones rather than performing updates on the old nodes. The new node will be added and the old one removed.

Step one for a healthy node is to set the availability to drain. This will ensure that all scheduled tasks have been stopped and moved to other nodes in the swarm.

Step two is to run docker swarm leave from the node that will be leaving the swarm. This will assign the node a Down status. In the following example, worker2 has left the swarm:

$ docker node ls

ID                          HOSTNAME STATUS AVAILABILITY MANAGER STATUS

1i3wtacdjz5p509bu3l3qfbei   worker1   Ready   Active       Reachable

3f2t4wwwthgcahs9089b8przv   worker2   Down   Active      

6d6e6kxlyjuo9vb9w1uug95zh * manager   Ready   Active       Leader

If the node that is being removed is a manager, it must first be demoted to a worker as described earlier before running docker swarm leave. When removing managers, take care that you do not lose quorum or your cluster will stop working.

Once the node has been marked Down, it can be removed from the swarm. From a manager node, use docker node rm to remove the node from the swarm:

$ docker node rm worker2

In some cases, the node that you want to remove is unreachable, making it impossible to run docker swarm leave. When that happens, use the --force option:

$ docker node rm --force worker2

Nodes that have been removed can be re-added with docker swarm join.

Backup and recovery

Just because Docker Swarm is resilient to failure does not mean that you should ignore backups. Good backups may be the difference between restoring services and a resume-altering event. There are two major components to back up: the swarm state data and your application data.

Backing up the Swarm

Each manager keeps the cluster information in /var/lib/docker/swarm/raft. If, for some reason, you need to completely rebuild your cluster, you will need this data. Make sure you have at least one good backup of the raft data. It does not matter which of the managers the backups are pulled from. It might be wise to pull backups from a couple of managers, just in case one is corrupted.
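A minimal backup sketch, assuming systemd, might look like the following. Stopping Docker briefly keeps the raft data consistent, so make sure the remaining managers still have a quorum before doing this:

$ sudo systemctl stop docker
$ sudo tar czf /tmp/swarm-raft-backup.tar.gz -C /var/lib/docker/swarm raft
$ sudo systemctl start docker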

Recovering a Swarm

In most cases, a failed manager is an easy fix. Restart the failed manager and everything should be good. If that is not possible, it may be necessary to build a new manager or promote an existing worker to manager. In most circumstances, this will bring your cluster back into a healthy state.

If you lose enough managers to lose a quorum, recovery gets more complex. The first step is to start enough managers to restore quorum. The data should be synced to the new managers, and once quorum is recovered, so is the cluster. If that does not work, you will have to rebuild the swarm. This is where your backups come in.

If the manager node you choose to rebuild on has an otherwise healthy raft database, you can start there. If not, or if you are rebuilding on a brand new node, stop Docker and copy the raft data back to /var/lib/docker/swarm/raft. After the raft data is in place, ensure that Docker is running and run the following command:

$ docker swarm init --force-new-cluster --advertise-addr manager

The address set with --advertise-addr has the same meaning as when the swarm was created initially. The magic here is the --force-new-cluster option. This option discards the old manager membership information in the raft database but keeps things such as the worker node list, running services, and tasks.

Backing up services

Service information is backed up as part of the raft database, but you should have a plan to rebuild services in case the database becomes corrupted. Backing up the output of docker service ls is a start. Application information, including networks and volumes, may be sourced from Docker Compose files, which should be backed up. The Dockerfiles for your images and your application code should be in version control and backed up.
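A simple sketch for capturing service definitions might look like the following; the file names are arbitrary:

#!/bin/bash
# Save a summary of the running services
docker service ls > services.txt
# Save the full definition of each service as JSON
for svc in $(docker service ls -q); do
    docker service inspect "$svc" > "service-${svc}.json"
done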

Most importantly, do not forget your data. If you have your Dockerfiles and the application code, the applications can be rebuilt even if the registry is lost. In some cases, it is a valid choice not to back up the registry since the images can, potentially, be rebuilt. The data, however, usually cannot be. Have a strategy in place for backing up the data that works for your environment. I suggest creating a container for each deployed application that can properly back up that application's data. Docker Swarm does not have a native scheduled task option, but you can configure cron on a worker node to run the various backup containers on a schedule.

The pause availability option can be helpful here. Configure a worker node that will be your designated host to pull backups. Set the availability to pause so that other containers are not started on the node and resources are available to perform the backups. Using pause means that containers can be started in the background and will continue to run after the node is paused, allowing them to finish normally. Then cron can run a script that looks something like the following one. The contents of run-backup-containers is left as an exercise for the reader:

#!/bin/bash
# Reactivate the backup node so that the backup containers can be scheduled on it
docker node update --availability active backupworker
# Run the backup containers (left as an exercise for the reader)
run-backup-containers
# Pause the node again so that regular services are not scheduled on it
docker node update --availability pause backupworker

You can also use labels to designate multiple nodes for backups and add constraints so that regular services ignore those nodes while the backup containers run on them.
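As a sketch, assuming a node named worker3 is set aside for backups and a hypothetical backup image, the labels and constraints might look like this:

$ docker node update --label-add backup=true worker3
$ docker service create --constraint "node.labels.backup != true" --name web nginx
$ docker service create --constraint "node.labels.backup == true" --name app-backup my-backup-image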

Managing services

Now that the swarm is up and running, it is time to look at services. A service is a collection of one or more tasks that do something. Each task is a container running somewhere in the swarm. Since services are potentially composed of multiple tasks, there are different tools to manage them. In most cases, these commands will be a subcommand of docker service.

Running services

Running tasks with Docker Swarm is a little different from running them under plain Docker. Instead of using docker run, the command is docker service create:

$ docker service create --name web nginx

When a service starts, a swarm manager schedules the tasks to run on active workers in the swarm. By default, Swarm will spread running containers across all of the active hosts in the swarm.

Offering services to the Internet requires publishing ports with the -p flag. Multiple ports can be opened by specifying -p multiple times. When using the swarm overlay network, you also get ingress mesh routing. The mesh will route connections from any host in the cluster to the service no matter where it is running. Port publishing will also load balance across multiple containers:

$ docker service create --name web -p 80 -p 443 nginx

Creating replicas

A service can be started with multiple containers using the --replicas option. The value is the number of desired replicas. It may take a moment to start all of the desired replicas:

$ docker service create --replicas 2 --name web -p 80 -p 443 nginx

This example starts two copies of the nginx container under the service name web. The great news is that you can change the number of replicas at any time:

$ docker service update --replicas 3 web

Even better, services can be scaled up or down at any time. This example scales the service up to three containers; the number can be scaled back down later once the extra replicas are no longer needed.
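Docker also provides docker service scale as a shorthand for the same operation; if it is available in your version, the following is equivalent:

$ docker service scale web=3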

Use docker service ls to see a summary of running services and the number of replicas for each:

$ docker service ls

ID           NAME REPLICAS IMAGE COMMAND

4i3jsbsohkxj web   3/3       nginx

If you need to see the details of a service, including where tasks are running, use the docker service ps command. It takes the name of the service as an argument. This example shows three nginx tasks that are part of the service web:

$ docker service ps web

ID                         NAME   IMAGE  NODE     DESIRED STATE  CURRENT STATE            ERROR
d993z8o6ex6wz00xtbrv647uq  web.1  nginx  worker2  Running        Running 31 minutes ago
eueui4hw33eonsin9hfqgcvd7  web.2  nginx  worker1  Running        Preparing 4 seconds ago
djg5542upa1vq4z0ycz8blgfo  web.3  nginx  worker2  Running        Running 2 seconds ago

Take note of the names of the tasks. If you were to connect to worker1 and try to use docker exec to access the web.2 container, you would get an error:

$ docker exec -it web.2 bash

Error response from daemon: No such container: web.2

Tasks started with docker service are named with the name of the service, a number, and the ID of the task, separated by dots. Using docker ps on worker1, you can see the actual name of the web.2 container:

$ docker ps

CONTAINER ID   IMAGE          COMMAND                  CREATED      STATUS      PORTS             NAMES
b71ad831eb09   nginx:latest   "nginx -g 'daemon off"   2 days ago   Up 2 days   80/tcp, 443/tcp   web.2.eueui4hw33eonsin9hfqgcvd7

Running global services

There may be times when you want to run a task on every active node in the swarm. This is useful for monitoring tools:

$ docker service create --mode global --name monitor nginx

$ docker service ps monitor

ID                         NAME        IMAGE  NODE     DESIRED STATE  CURRENT STATE          ERROR
daxkqywp0y8bhip0f4ocpl5v1  monitor     nginx  worker2  Running        Running 6 seconds ago
a45opnrj3dcvz4skgwd8vamx8   _ monitor  nginx  worker1  Running        Running 4 seconds ago

It is important to reiterate that global services only run on active nodes. The task will not start on nodes that have the availability set to pause or drain. If a paused or drained node is set to active, the global service will be started on that node immediately.

$ docker node update --availability active manager

manager

$ docker service ps monitor

ID                         NAME        IMAGE  NODE     DESIRED STATE  CURRENT STATE            ERROR
0mpe2zb0mn3z6fa2ioybjhqr3  monitor     nginx  manager  Running        Preparing 3 seconds ago
daxkqywp0y8bhip0f4ocpl5v1   _ monitor  nginx  worker2  Running        Running 3 minutes ago
a45opnrj3dcvz4skgwd8vamx8   _ monitor  nginx  worker1  Running        Running 3 minutes ago

Setting constraints

It is often useful to limit which nodes a service can run on. For example, a service that depends on fast disk might be limited to nodes that have SSDs. Constraints are added to the docker service create command with the --constraint flag. Multiple constraints can be added; the result will be the intersection of all of the constraints.

For this example, assume that there exists a swarm with three nodes: manager, worker1, and worker2. The worker1 and worker2 nodes have the env=prod label, while manager has the env=dev label. If a service is started with the constraint that env is dev, it will only run its tasks on the manager node.
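The labels in this example could have been applied ahead of time with docker node update; a sketch of how that might look is shown here:

$ docker node update --label-add env=prod worker1
$ docker node update --label-add env=prod worker2
$ docker node update --label-add env=dev manager

With the labels in place, start a service constrained to the dev environment: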

$ docker service create --constraint "node.labels.env == dev" --name web-dev --replicas 2 nginx                                  

913jm3v2ytrpxejpvtdkzrfjz

$ docker service ps web-dev

ID                         NAME       IMAGE NODE     DESIRED STATE CURRENT STATE         ERROR                                                            

5e93fl10k9x6kq013ffotd1wf web-dev.1 nginx manager Running       Running 6 seconds ago                                                                    

5skcigjackl6b8snpgtcjbu12 web-dev.2 nginx manager Running       Running 5 seconds ago

Even though there are two other nodes in the swarm, the service is only running on the manager because it is the only node with the env=dev label. If another service were started with the constraint that env is prod, the tasks would start on the worker nodes:

$ docker service create --constraint "node.labels.env == prod" --name web-prod --replicas 2 nginx                                  

88kfmfbwksklkhg92f4fkcpwx

$ docker service ps web-prod

ID                         NAME       IMAGE NODE     DESIRED STATE CURRENT STATE         ERROR                                                            

5f4s2536g0bmm99j7wc02s963 web-prod.1 nginx worker2 Running       Running 3 seconds ago                                                                  

5ogcsmv2bquwpbu1ndn4i9q65 web-prod.2 nginx worker1 Running       Running 3 seconds ago

The constraints will be honored if the services are scaled. No matter how many replicas are requested, the containers will only be run on nodes that match the constraints:

$ docker service update --replicas 3 web-prod

web-prod

$ docker service ps web-prod

ID                         NAME       IMAGE NODE     DESIRED STATE CURRENT STATE               ERROR                                                      

5f4s2536g0bmm99j7wc02s963 web-prod.1 nginx worker2 Running       Running about a minute ago                                                              

5ogcsmv2bquwpbu1ndn4i9q65 web-prod.2 nginx worker1 Running       Running about a minute ago                                                              

en15vh1d7819hag4xp1qkerae web-prod.3 nginx worker1 Running       Running 2 seconds ago

As you can see from the example, the containers are all running on worker1 and worker2. This leads to an important point. If the constraints cannot be satisfied, the service will be started but no containers will actually be running:

$ docker service create --constraint "node.labels.env == testing" --name web-test --replicas 2 nginx                              

6tfeocf8g4rwk8p5erno8nyia

$ docker service ls

ID           NAME     REPLICAS IMAGE COMMAND

6tfeocf8g4rw web-test 0/2       nginx

88kfmfbwkskl web-prod 3/3       nginx

913jm3v2ytrp web-dev   2/2       nginx

Notice that the number of replicas requested is two but the number of containers running is zero. Swarm cannot find a suitable node so it does not start the containers. If a node with the env=testing label were to be added or if that label were to be added to an existing node, swarm would immediately schedule the tasks:

$ docker node update --label-add env=testing worker1

worker1

$ docker service ps web-test

ID                         NAME       IMAGE NODE     DESIRED STATE CURRENT STATE           ERROR                                                          

7hsjs0q0pqlb6x68qos19o1b0 web-test.1 nginx worker1 Running       Running 18 seconds ago                                                                  

dqajwyqrah6zv83dsfqene3qa web-test.2 nginx worker1 Running       Running 18 seconds ago

In this example, the env label on worker1 was changed from prod to testing. Since a node is now available that meets the constraints for the web-test service, swarm started the containers on worker1. However, the constraints are only checked when tasks are scheduled. Even though worker1 no longer has the env label set to prod, the existing containers for the web-prod service are still running:

$ docker node ps worker1

ID                         NAME       IMAGE NODE     DESIRED STATE CURRENT STATE          ERROR                                                          

7hsjs0q0pqlb6x68qos19o1b0 web-test.1 nginx worker1 Running       Running 5 minutes ago                                                                  

dqajwyqrah6zv83dsfqene3qa web-test.2 nginx worker1 Running       Running 5 minutes ago                                                                  

5ogcsmv2bquwpbu1ndn4i9q65 web-prod.2 nginx worker1 Running       Running 21 minutes ago                                                                  

en15vh1d7819hag4xp1qkerae web-prod.3 nginx worker1 Running       Running 20 minutes ago

Stopping services

All good things come to an end and this includes services running in a swarm. When a service is no longer needed, it can be removed with docker service rm. When a service is removed, all tasks associated with that service are stopped and the containers removed from the nodes they were running on. The following example removes a service named web:

$ docker service rm web

Docker makes the assumption that the only time services are stopped is when they are no longer needed. Because of this, there is no docker service analog to the docker stop command. This might not be an issue since services are so easily recreated. That said, I have run into situations where I have needed to stop a service for a short time for testing and did not have the command at my fingertips to recreate it.

The solution is very easy but not necessarily obvious. Rather than stopping the service and recreating it, set the number of replicas to zero. This will stop all running tasks and they will be ready to start up again when needed.

$ docker service update --replicas 0 web

web

$ docker service ls

ID           NAME REPLICAS IMAGE COMMAND

5gdgmb7afupd web   0/0       nginx

The containers for the tasks are stopped, but remain on the nodes. The docker service ps command will show that the tasks for the service are all in the Shutdown state. If needed, one can inspect the containers on the nodes:

$ docker service ps web

ID                         NAME   IMAGE NODE     DESIRED STATE CURRENT STATE           ERROR                                                                

68v6trav6cf2qj8gp8wgcbmhu web.1 nginx worker1 Shutdown       Shutdown 3 seconds ago                                                                      

060wax426mtqdx79g0ulwu25r web.2 nginx manager Shutdown       Shutdown 2 seconds ago                                                                      

79uidx4t5rz4o7an9wtya1456 web.3 nginx worker2 Shutdown       Shutdown 3 seconds ago

When it is time to bring the service back, use docker service update to set the number of replicas back to what is needed. Swarm will start the containers across the swarm just as if you had used docker service create.
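For example, to bring the web service from earlier back up to three replicas:

$ docker service update --replicas 3 web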

Upgrading a service with rolling updates

It is likely that services running in a swarm will need to be upgraded. Traditionally, upgrades involved stopping a service, performing the upgrade, then restarting the service. If everything goes well, the service starts and works as expected and the downtime is minimized. If not, there can be an extended outage as the administrator and developers debug what went wrong and restore the service.

Docker makes it easy to test new images before they are deployed and one can be confident that the service will work in production just as it did during testing. The question is, how does one deploy the upgraded service without a noticeable downtime for the users? For busy services, even a few seconds of downtime can be problematic. Docker Swarm provides a way to update services in the background with zero downtime.

There are three options which are passed to docker service create that control how rolling updates are applied. These options can also be changed after a service has been created with docker service update. The options are as follows:

  • --update-delay: This option sets the delay between each container upgrade. The delay is defined by a number of hours, minutes, and seconds indicated by a number followed by h, m, or s, respectively. For example, a 30 second delay will be written as 30s. A delay of 1 hour, 30 minutes, and 12 seconds will be written as 1h30m12s.
  • --update-failure-action: This tells swarm how to handle an upgrade failure. By default, Docker Swarm will pause the upgrade if a container fails to upgrade. You can configure swarm to continue even if a task fails to upgrade. The allowed values are pause and continue.
  • --update-parallelism: This tells swarm how many tasks to upgrade at one time. By default, Docker Swarm will only upgrade one task at a time. If this is set to 0 (zero), all running containers will be upgraded at once.

For this example, a service named web is started with six replicas using the nginx:1.10 image. The service is configured to update two tasks at a time and wait 30 seconds between updates. The list of tasks from docker service ps shows that all six tasks are running and that they are all running the nginx:1.10 image:

$ docker service create --name web --update-delay 30s --update-parallelism 2 --replicas 6 nginx:1.10

$ docker service ps web

ID                         NAME   IMAGE       NODE     DESIRED STATE CURRENT STATE           ERROR                                                            

83p4vi4ryw9x6kbevmplgrjp4 web.1 nginx:1.10 worker2 Running       Running 20 seconds ago                                                                  

2yb1tchas244tmnrpfyzik5jw web.2 nginx:1.10 worker1 Running       Running 20 seconds ago                                                                  

f4g3nayyx5y6k65x8n31x8klk web.3 nginx:1.10 worker2 Running       Running 20 seconds ago                                                                  

6axpogx5rqlg96bqt9qn822rx web.4 nginx:1.10 worker1 Running       Running 20 seconds ago                                                                  

2d7n5nhja0efka7qy2boke8l3 web.5 nginx:1.10 manager Running       Running 16 seconds ago                                                                  

5sprz723zv3z779o3zcyj28p1 web.6 nginx:1.10 manager Running       Running 16 seconds ago

Updates are started using the docker service update command and specifying a new image. In this case, the service will be upgraded from nginx:1.10 to nginx:1.11. Rolling updates work by stopping the number of tasks defined by --update-parallelism and starting new tasks based on the new image. Swarm will then wait until the delay set by --update-delay elapses before upgrading the next tasks:

$ docker service update --image nginx:1.11 web

When the updates begin, two tasks will be updated at a time. If the image is not found on the node, it will be pulled from the registry, slowing down the update. You can speed up the process by writing a script to pull the new image on each node before you run the update.
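A pre-pull script might look something like the following sketch. It assumes SSH access to each node and that the node names resolve; adjust the node list and image for your environment:

#!/bin/bash
# Pull the new image on every node before starting the rolling update
for node in manager worker1 worker2; do
    ssh "$node" docker pull nginx:1.11
done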

The update process can be monitored by running docker service inspect or docker service ps:

$ docker service inspect --pretty web              

ID:             4a60v04ux70qdf0fzyf3s93er

Name:           web

Mode:           Replicated

Replicas:     6

Update status:

State:         updating

Started:       about a minute ago

Message:       update in progress

Placement:

UpdateConfig:

Parallelism:   2

Delay:         30s

On failure:   pause

ContainerSpec:

Image:         nginx:1.11

Resources:

$ docker service ps web

ID                         NAME       IMAGE       NODE     DESIRED STATE CURRENT STATE                   ERROR                                              

83p4vi4ryw9x6kbevmplgrjp4 web.1     nginx:1.10 worker2 Running       Running 35 seconds ago                                                              

29qqk95xdrb0whcdy7abvji2p web.2     nginx:1.11 manager Running       Preparing 2 seconds ago                                                            

2yb1tchas244tmnrpfyzik5jw   _ web.2 nginx:1.10 worker1 Shutdown       Shutdown 2 seconds ago                                                              

f4g3nayyx5y6k65x8n31x8klk web.3     nginx:1.10 worker2 Running       Running 35 seconds ago                                                              

6axpogx5rqlg96bqt9qn822rx web.4     nginx:1.10 worker1 Running       Running 35 seconds ago                                                              

3z6ees2748tqsoacy114ol183 web.5     nginx:1.11 worker1 Running       Running less than a second ago                                                      

2d7n5nhja0efka7qy2boke8l3   _ web.5 nginx:1.10 manager Shutdown       Shutdown 1 seconds ago                                                              

5sprz723zv3z779o3zcyj28p1 web.6     nginx:1.10 manager Running       Running 30 seconds ago

As the upgrade starts, the web.2 and web.5 tasks have been updated and are now running nginx:1.11. The others are still running the old version. Every 30 seconds, two more tasks will be upgraded until the entire service is running nginx:1.11.

Docker Swarm does not care which version is used on the image tag. This means that you can just as easily downgrade the service. In this case, if the docker service update --image nginx:1.10 web command were run after the upgrade, Swarm would go through the same update process until all tasks are running nginx:1.10. This can be very helpful if an upgrade does not work as it was supposed to.

It is also important to note that there is nothing that says that the new image has to be the same base as the old one. You can decide to run Apache instead of Nginx by running docker service update --image httpd web. Swarm will happily replace all of the web tasks which were running an nginx image with ones running the httpd image.

Rolling updates require that your service is able to run multiple containers in parallel. This may not work for some services, such as SQL databases. Some updates may require a schema change that is incompatible with the older version. In either case, rolling updates may not work for the service. In those cases, you can set --update-parallelism 0 to force all tasks to update at once, or manually recreate the service. If you are running a lot of replicas, you should pre-pull your image to ease the load on your registry.

Summary

In this article, you have seen how to use Docker Swarm to orchestrate a Docker cluster. The same tools that are used with a single node can be used with a swarm. Additional tools available through swarm allow for easy scale out of services and rolling updates with little to no downtime.
