
Welcome to Part 4 of a series that started with creating your very first Docker container and has gradually climbed the learning curve, from deploying that container in the cloud to dealing with multi-container applications. If you haven’t started from the beginning of this series, I highly suggest you at least read the posts, if not follow along with the tutorials, to give context to this fourth installment. Let’s pick up where we left off in Part 3, with a containerized Taiga instance. We want to deploy it in a more robust setup than a few containers running on a single host, and Docker has a couple of tools that will let us do this in concert with Compose, which was covered in Part 3.

I’ve updated the docker-taiga repo with a swarm branch, so go ahead and run a git pull if you’ve come here from Part 3. Running git checkout swarm will switch you to that branch and get you ready to follow along with the rest of the post, which will really just break down the deploy.sh shell script in the root of the application. If you’re impatient, have VirtualBox and Docker Machine installed, and have at least 4 GB of RAM to spare, go ahead and kick off the script to create a highly available cluster hosting the Taiga application on your very own machine. Of course, you’re more likely to be deploying this on a server or in the cloud, and you’ll need to modify the script to tell Docker Machine to use the driver for your virtualization platform of choice.
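For the impatient, the setup described above condenses to a few commands, assuming you still have your clone of the docker-taiga repo from Part 3:

git pull              # grab the new swarm branch
git checkout swarm    # switch to it
./deploy.sh           # spin up the cluster and deploy Taiga locally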

The two tools alluded to above are Machine and Swarm, and we’ll look at each independently before diving into how they can be used in concert with Compose to automate a clustered application deployment.

Docker Machine

I wrote the previous two posts with the unwritten assumption that you were following along on a Linux box, but market share and this series’ target audience dictate that you’re probably already familiar with Docker Machine, since it’s the tool that installed the Docker daemon on your Mac or Windows PC. If you’re running Linux, installing it is as easy as installing Compose was in Part 3:

# curl -L https://github.com/docker/machine/releases/download/v0.6.0/docker-machine-`uname -s`-`uname -m` \
    > /usr/local/bin/docker-machine && chmod +x /usr/local/bin/docker-machine
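Once it’s installed, spinning up a local VirtualBox-backed Docker host looks roughly like this (the machine name default is just a common convention, not a requirement):

docker-machine create --driver virtualbox default   # provision a small VM running the Docker daemon
eval "$(docker-machine env default)"                # point your local docker client at that VM
docker info                                         # commands now run against the daemon inside the VM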

Docker Machine does far more than give you a local VirtualBox VM to run the Docker daemon on; it’s a full-fledged provisioning tool. In fact, you can provision a full Swarm cluster with Machine, fully automating your application deployments. Let’s first take a quick look at Swarm before we get back to harnessing the power of Machine to automate our Swarm deployments.

Docker Swarm

Docker Swarm takes a group of ‘nodes’ (VMs) and clusters them so that they behave like a single Docker host. There’s a lot of manual setup involved, including installing the Docker Engine on each node, opening a port on each node, and installing TLS certs to secure communication between those nodes. The Swarm application itself is built to run in its own container, and creating a ‘cluster token’ that identifies a new Swarm cluster is as easy as running docker run swarm create.
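To give you a sense of that manual, token-based flow, it looks roughly like this (the addresses and ports are placeholders, and a real setup also needs the TLS plumbing mentioned above):

docker run --rm swarm create                                     # prints a cluster token
docker run -d swarm join --addr=<node-ip>:2375 token://<token>   # run on every node
docker run -d -p 3376:2375 swarm manage token://<token>          # run on the manager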

At this point, if you’re smart, you’ve probably read this and said to yourself, “Now I have to learn some sort of configuration management software? I was just reading through this series to figure out how I could make my development life easier with Docker! I leave the Puppet/Chef/Ansible stuff to the sysadmins.” Don’t worry; Docker has your back. You can provision a Swarm with Machine!

Docker Machine + Swarm

To be fair, this isn’t as feature-rich or configurable as a true configuration management solution. The gist is that docker-machine create accepts a --swarm flag; pair it with --swarm-master, --swarm-discovery token://SWARM_CLUSTER_TOKEN, and a unique HOST_NODE_NAME to automatically provision a Swarm master. Do the same thing, minus the --swarm-master flag, to provision the cluster nodes. Then docker-machine env --swarm HOST_NODE_NAME prints the handful of environment variables that point your Docker client at the Swarm master and tell it where to find the TLS certs.
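Putting that together, provisioning a small cluster with Machine might look something like this (the node names are arbitrary, and you’d swap the VirtualBox driver for your cloud provider’s):

# grab a discovery token from any Docker host
docker run --rm swarm create                 # outputs SWARM_CLUSTER_TOKEN

# provision the Swarm master
docker-machine create --driver virtualbox \
  --swarm --swarm-master \
  --swarm-discovery token://SWARM_CLUSTER_TOKEN \
  swarm-master

# provision a worker node the same way, minus --swarm-master
docker-machine create --driver virtualbox \
  --swarm \
  --swarm-discovery token://SWARM_CLUSTER_TOKEN \
  swarm-node-01

# point your docker client at the cluster as a whole
eval "$(docker-machine env --swarm swarm-master)"
docker info                                  # now reports every node in the swarm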

Holistic Docker

This is largely experimental at this point. If you’re looking to do this in production with more than the most basic of tiered applications, stick around for the post on CoreOS and Kubernetes. If you absolutely love Docker, then you shouldn’t have to wait long for that first sentence to be wrong.

The basic workflow for a wholly-Docker deployment looks like this (it’s demonstrated in deploy.sh, and sketched just after the list):

  1. Containerize the layers of your application with Dockerfiles. Use those to build the images and push them to a registry.
  2. Provision a Swarm cluster using Docker Machine.
  3. Use Compose to deploy the containerized application to the cluster.
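Steps 1 and 3 boil down to something like the following (the image name and registry here are placeholders; step 2 is what the docker-machine example above walks through):

# step 1: build your application image and push it to a registry
docker build -t registry.example.com/my-app:latest .
docker push registry.example.com/my-app:latest

# step 3: point your client at the Swarm master and bring the stack up
eval "$(docker-machine env --swarm swarm-master)"
docker-compose up -d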

Important considerations (as of this writing): Don’t use the Swarm cluster to build the container images from the Dockerfiles; have it pull pre-built images from a registry instead. This means your compose file shouldn’t have a single ‘build’ entry. The updated docker-compose.yml does this by pulling the Taiga images from Docker Hub, but your own private application containers will need a private registry, whether on Docker Hub, Google Cloud Platform (as demonstrated in Part 2), or elsewhere. Manually schedule services with multiple dependencies so that such a service has them all living on the same node. And explicitly map your volumes to ensure that you don’t get ‘port clash’.
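To give a flavor of what those considerations look like in the compose file itself, a single service entry might end up something like this (the image, node name, and paths are illustrative, not copied from the repo):

taiga-back:
  image: someuser/taiga-back:latest        # pulled from a registry; no 'build' key anywhere in the file
  environment:
    - "constraint:node==swarm-node-01"     # classic Swarm scheduling constraint, pins the service to one node
  volumes:
    - /srv/taiga-media:/data/media         # explicitly mapped host path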

Once again, there are a number of caveats covering considerations that were beyond the scope of this post but are necessary for a production Taiga deployment; the three caveats mentioned in Part 3 apply here as well. As mentioned at the beginning of this post, if you want to use that shell script for anything beyond testing, you’ll need to configure the Machine driver to use something other than VirtualBox. If you’ve stuck around this far, stay tuned for the final part of this speed-climb up the containerization learning curve, where I discuss the non-Docker deployment options.

About the Author

Darwin Corn is a systems analyst for the Consumer Direct Care Network. He is a mid-level professional with diverse experience in the information technology world.
