
This is the final installment of a five-part series that began with a guide to container creation [link to part 1] and cloud deployment [link to part 2], then progressed to application layering [link to part 3] and deploying tiered applications to a cluster [link to part 4]. The series now wraps up by stepping outside the Docker ecosystem to look at other clustered deployment options.

That is largely beyond the scope of a series of posts focused on Docker. However, while working on this series, I also worked with CoreOS. The beauty of CoreOS is that it integrates fully with the core Docker Engine (though as of this writing, rkt, CoreOS’s own container runtime, has just been declared production-ready, and I’m sure they’ll push users in that direction as development continues).

The CoreOS ecosystem consists of these primary components:

  • CoreOS: the base operating system
  • etcd: a distributed key-value store
  • fleet: a distributed init system that schedules systemd units across the cluster
  • flannel: an overlay network that connects it all together

There’s a lot more to the distro than this, but those are its heavy hitters. Let’s take a look at each part individually in the context of the Taiga application used in Parts 3 and 4. I’ve once again branched the docker-taiga repo, so run git pull to ensure that you’re working with the latest material, then run git checkout coreos and you’ll be ready to follow along. As in Part 4, there’s a deploy.sh script in the application root, and this time, if you’re running Linux with KVM/libvirt and at least 4 GB of available RAM, you can kick it off and watch the magic of a highly available Taiga cluster launch on your very own computer.
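The setup steps above boil down to three commands (assuming you’ve already cloned docker-taiga):

```shell
git pull              # grab the latest material
git checkout coreos   # switch to this post's branch
./deploy.sh           # Linux with KVM/libvirt and ~4 GB of free RAM required
```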


CoreOS

CoreOS is a Gentoo-based Linux distro designed for one purpose: container deployments. While it can run as a solo instance, it really shines in a cluster, integrating with a host of associated projects to make that happen. There’s not much of interest in the OS itself beyond the fact that it’s deployed differently from what you’re likely used to if you come from the Red Hat / Ubuntu server world: the machine is provisioned with a YAML file called a cloud-config and can be booted in a number of ways, from your favorite cloud provider’s SDK to a simple install script.
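As a minimal sketch (the hostname, SSH key, and discovery token are placeholders you’d substitute for your own), a cloud-config for one node of a three-machine cluster might look something like this:

```yaml
#cloud-config

hostname: core-01
coreos:
  etcd2:
    # Generate a fresh token for each cluster: https://discovery.etcd.io/new?size=3
    discovery: https://discovery.etcd.io/<token>
    advertise-client-urls: http://$private_ipv4:2379
    initial-advertise-peer-urls: http://$private_ipv4:2380
    listen-client-urls: http://0.0.0.0:2379
    listen-peer-urls: http://$private_ipv4:2380
  fleet:
    public-ip: $private_ipv4
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
ssh_authorized_keys:
  - ssh-rsa AAAA... you@yourmachine
```

Feed the same file (minus the hostname) to each machine and the discovery token takes care of forming the cluster.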


etcd

etcd is a simple key-value store (think Redis) specifically tailored for use in CoreOS. It’s meant to house the data needed across the entire system (database connection information, feature flags, and so on) as well as handle cluster coordination itself, including leader election among machines and containers. It’s a versatile tool, and for full use cases beyond the one demonstrated in deploy.sh, I highly recommend looking at CoreOS’s list of projects using etcd.
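For illustration (the key names here are made up, not taken from the repo), storing and reading shared configuration from any node of a running cluster looks like this etcdctl session:

```shell
# Write a value that every machine in the cluster can see
etcdctl set /taiga/db/host 10.1.2.3

# Read it back from any other node
etcdctl get /taiga/db/host

# Block until the key changes -- handy for coordinating reconfiguration
etcdctl watch /taiga/db/host
```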


fleet

The fleet tool is probably my favorite of the bunch. It’s where CoreOS shines in its ease of achieving high availability, because it lets you use the now-familiar unit files of systemd to schedule processes on one, some, or all of the machines in a cluster. Perhaps its simplest application is running containers, and that only scratches the surface of what it can accomplish. One of the cooler use cases is simplified service discovery, which helps the cluster stay coordinated as well as scalable. A form of the sidekick method is demonstrated in some of the unit files that deploy.sh references.
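As a hedged sketch (the unit and image names are invented for illustration, not taken from the repo), a fleet unit that runs a container and refuses to share a machine with its siblings might look like:

```ini
# taiga-web@.service -- illustrative template unit
[Unit]
Description=Taiga web frontend %i
Requires=docker.service
After=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f taiga-web-%i
ExecStart=/usr/bin/docker run --name taiga-web-%i -p 8000:8000 taiga/web
ExecStop=/usr/bin/docker stop taiga-web-%i

[X-Fleet]
# Never schedule two instances of this unit on the same machine
Conflicts=taiga-web@*.service
```

Launching an instance is then just `fleetctl start taiga-web@1.service`; fleet picks an eligible machine for you, and the [X-Fleet] section is all that distinguishes this from a plain systemd unit.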


flannel

flannel is where I started getting uncomfortable, and then was immediately reassured. In a previous job I had to manage firewalls and switches, and at no point did my pretend network engineer alter ego feel comfortable troubleshooting network issues; I used to joke that my job was watching Cisco SmartNet do it for me. flannel simplifies all of that by running an agent on each node that controls the subnet for that host’s containers and records it in etcd, ensuring the configuration isn’t lost if the node goes down for whatever reason. Port mapping then becomes the trivial process of letting flannel route traffic for you, given a minimal amount of configuration.
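flannel reads that minimal configuration from etcd. As a sketch (the subnet is arbitrary), you seed the overlay network once and each node’s agent carves out its own slice:

```shell
# Store the overlay network config in etcd (run once, from any node)
etcdctl set /coreos.com/network/config \
  '{ "Network": "10.1.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }'

# Each node's flanneld then claims a /24 (e.g. 10.1.5.0/24) and records the
# lease back in etcd, so containers get cluster-routable addresses.
```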

Further reading

Of course, we covered all this without even touching on Kubernetes, which uses some of the CoreOS toolset itself. In fact, the two pair pretty well together, but you could write a whole book on Kubernetes alone. For more hands-on material, start with this excellent (if brief) guest blog on CoreOS’s website about integrating Ansible with CoreOS deployments. After that, feel free to reference this post and the series as a whole as you descend into the rabbit hole that is highly available, containerized, tiered applications. Good luck!

About the Author

Darwin Corn is a systems analyst for the Consumer Direct Care Network. He is a mid-level professional with diverse experience in the information technology world.

