A Development Workflow with Docker


In this post, we’re going to explore the sacred developer workflow, and how we can leverage modern technologies to craft a very opinionated and trendy setup.

Such a topic inevitably involves a lot of personal taste, so we will mostly focus on ideas that have the potential to increase developer happiness, productivity and software quality. The tools used in this article made my life easier, but feel free to pick what you like and swap what you don’t with your own arsenal.

While it is a good idea to stick with mature tools and seriously learn how to master them, you should keep an open mind and periodically monitor what’s new. Software development evolves at an intense pace and smart people regularly come up with new projects that can help us to be better at what we do.

To keep things concrete and challenge our hypotheses, we’re going to develop a development tool. Our small command line application will manage the creation, listing and destruction of project tickets. We will write it in node.js to enjoy a scripting language, a very large ecosystem and nice integration with yeoman. This last reason foreshadows future features, and probably a post about them.

Code Setup

The code has been tested under Ubuntu 14.10, io.js version 1.8.1 and npm version 2.8.3. As this post focuses on the workflow, rather than on the code, I’ll keep everything as simple as possible and assume you have a basic knowledge of docker and developing with node.

Now let’s build the basic structure of a new node project.

code/ ➜ tree
├── package.json
├── bin
│   └── iago.js
├── lib
│   └── notebook.js
└── test
    ├── mocha.opts
    └── notebook.js

Some details:

  • bin/iago.js is the command line entry point.
  • lib/notebook.js exports the methods to interact with tickets.
  • test/ uses mocha and chai for unit-testing.
  • package.json provides information on the project:
"description": "Ticket management",
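
For reference, a minimal package.json matching this layout might look like the following (a sketch: only the description comes from the original file; the name, version and test script are reasonable placeholders):

```json
{
  "name": "iago",
  "version": "0.1.0",
  "description": "Ticket management",
  "bin": { "iago": "./bin/iago.js" },
  "scripts": {
    "test": "mocha"
  }
}
```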

Build Automation

As TDD advocates, let’s start with a failing test.

// test/notebook.js

// Mocha - the fun, simple, flexible JavaScript test framework
// Chai - Assertion Library
var expect = require('chai').expect;
var notebook = require('../lib/notebook');

describe('new note', function() {

  beforeEach(function(done) {
    // Reset the database, used to store tickets, before each test,
    // to keep tests independent
    notebook.backend.remove();
    done();
  });

  it('should be empty', function() {
    expect(notebook.backend.cloneDeep()).to.be.empty;
  });
});

In order to run it, we first need to install node, npm, mocha and chai. Ideally, we share the same software versions as the rest of the team, on the same OS, nothing conflicts with other projects we might develop on the same machine, and the production environment is exactly the same.

Or we could use docker and not bother.

$ # Start a new container, automatically removed once done;
$ # mount our code at /app and make it the default working dir;
$ # use the official io.js image to install the test libraries
$ # and save them in package.json.
$ docker run -it --rm \
    --volume $PWD:/app \
    --workdir /app \
    iojs \
    npm install --save-dev mocha chai

This one-liner installs mocha and chai locally in node_modules/. With nothing more than docker installed, we can now run the tests.

$ docker run -it --rm --volume $PWD:/app --workdir /app iojs node_modules/.bin/mocha

Having dependencies bundled along with the project lets us use the stack container as is. This approach extends remarkably well to other languages: Ruby has Bundler and Go has Godep.
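
Typing the full docker run invocation gets old quickly. A small shell function (a hypothetical helper; name it however you like) keeps the workflow terse:

```shell
# Wrap the boilerplate: run any command inside the iojs stack container,
# with the current project mounted at /app
iojs_run() {
  docker run -it --rm --volume "$PWD":/app --workdir /app iojs "$@"
}

# Usage:
#   iojs_run npm install --save-dev mocha chai
#   iojs_run node_modules/.bin/mocha
```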

Let’s make the test pass with the following implementation of our notebook.

// lib/notebook.js
/*jslint node: true */
'use strict';

var path = require('path');
// Flat JSON file database built on lodash API
var low = require('lowdb');
// Pretty unicode tables for the CLI with Node.JS
var table = require('cli-table');

/**
 * Storage with sane defaults
 * @param {string} dbPath - Flat (json) file Lowdb will use
 * @param {string} dbName - Lowdb database name
 */
function db(dbPath, dbName) {
  dbPath = dbPath || process.env.HOME + '/.iago.json';
  dbName = dbName || 'notebook';
  console.log('using', dbPath, 'storage');
  return low(dbPath)(dbName);
}

module.exports = {

  backend: db(),

  write: function(title, content, owner, labels) {
    var note = {
      meta: {
        project: path.basename(process.cwd()),
        date: new Date(),
        status: 'created',
        owner: owner,
        labels: labels
      },
      title: title,
      ticket: content
    };
    console.log('writing new note:', title);
    db().push(note);
  },

  list: function() {
    var i = 0;
    var grid = new table({head: ['title', 'note', 'author', 'date']});
    var dump = db().cloneDeep();
    for (; i < dump.length; i++) {
      grid.push([
        dump[i].title, dump[i].ticket,
        dump[i].meta.owner, dump[i].meta.date
      ]);
    }
    console.log(grid.toString());
  },

  done: function(title) {
    var notes = db().remove({title: title});
    console.log('note', notes[0].title, 'removed');
  }
};

Again we install dependencies and re-run tests.

# Install lowdb and cli-table locally
docker run -it --rm --volume $PWD:/app --workdir /app iojs npm install lowdb cli-table

# Successful tests
docker run -it --rm --volume $PWD:/app --workdir /app iojs node_modules/.bin/mocha

To sum up, so far:

  • The iojs container gives us a consistent node stack.
  • By mapping the code as a volume and bundling the dependencies locally, we can run tests or execute any command.

In the second part, we will try to automate the process and integrate those ideas smoothly in our workflow.

Coding Environment

Containers provide a consistent way to package environments and distribute them. This is ideal for setting up a development machine and sharing it with the team / world. The following Dockerfile builds such an artifact:

# Save it as provision/Dockerfile

FROM ruby:latest

RUN apt-get update && apt-get install -y tmux vim zsh

RUN gem install tmuxinator

# Inject development configuration
ADD workspace.yml /root/.tmuxinator/workspace.yml

ENTRYPOINT ["tmuxinator"]
CMD ["start", "workspace"]

Tmux is a popular terminal multiplexer and tmuxinator lets us easily control how to organize and navigate terminal windows. The configuration below sets up a single window split into three panes:

  • The main pane where we can move around and edit files
  • The test pane where tests continuously run on file changes
  • The repl pane with a running interpreter

# Save as provision/workspace.yml
name: workspace
# We find the same code path as earlier
root: /app

windows:
  - main:
      layout: main-vertical
      panes:
        - zsh
        # Watch files and rerun tests
        - docker exec -it code_worker_1 node_modules/.bin/mocha --watch
        - repl:
          # In case the worker container is still bootstrapping
          - sleep 3
          - docker exec -it code_worker_1 node

Let’s dig into what’s behind docker exec -it code_worker_1 node_modules/.bin/mocha --watch.

Workflow Deployment

This command supposes an iojs container, named code_worker_1, is running. So we have two containers to orchestrate, and docker-compose is a very elegant solution for that.

The configuration file below describes how to run them.

# Save as docker-compose.yml

worker:
  # This container provides the necessary tech stack
  image: iojs
  volumes:
    - .:/app
  working_dir: /app
  # Just hang around
  # The other container will be in charge of running interesting commands
  command: bash -c "while true; do echo hello world; sleep 10; done"

workspace:
  # This one is our development environment
  # Build the dockerfile we described earlier
  build: ./provision
  volumes:
    # Make the docker client available within the container
    - /var/run/docker.sock:/var/run/docker.sock
    - /usr/bin/docker:/usr/bin/docker
  # Make the code available within the container
  volumes_from:
    - worker
  stdin_open: true
  tty: true

Yaml gives us a very declarative description of our machines. Let’s breathe some life into them.

$ # Run in detach mode
$ docker-compose up -d
$ # ...
$ docker-compose ps
     Name                   Command           State
code_worker_1     while true; do echo hello w   Up
code_workspace_1   tmuxinator start workspace   Up

The code stack and the development environment are ready. We can reach them with docker attach code_workspace_1, and find a tmux session as configured above, with tests and repl in place.

Once done, ctrl-p + ctrl-q detaches the session from the container, and docker-compose stop stops both machines. Next time we develop on this project, a simple docker-compose up -d will bring back the entire stack and our favorite tools.

What’s Next

We combined a lot of tools, but most of them use configuration files we can tweak. In fact, this is just the starting point of a really promising line of thought. We could easily consider more sophisticated development environments, with personal dotfiles and a better provisioning system. The same goes for the stack container, which could be dedicated to android code and run on a powerful 16GB RAM remote server.

Containers unlock new potential for deployment, but also for development. The consistency those technologies bring on the table should encourage best practices, automation and help us write more reliable code, faster.


Courtesy of xkcd

About the author

Xavier Bruhiere is the CEO of Hive Tech. He contributes to many community projects, including Oculus Rift, Myo, Docker and Leap Motion. In his spare time he enjoys playing tennis, the violin and the guitar. You can reach him at @XavierBruhiere.

