
In this article by Jay LaCroix, the author of the book Mastering Ubuntu Server, we look at one of the technologies that has truly revolutionized the IT industry over the last several decades. Few would argue that the Internet itself is by far the most revolutionary technology to come around, but another technology that has created a paradigm shift in IT is virtualization. It changed the way we maintain our data centers, allowing us to segregate workloads into many smaller machines run from a single server or hypervisor. Since Ubuntu features the latest advancements of the Linux kernel, virtualization support is built right in. After installing just a few packages, we can create virtual machines on an Ubuntu Server installation without the need for a pricey license agreement or support contract. In this article, Jay will walk you through creating, running, and managing Docker containers.


Creating, running, and managing Docker containers

Docker is a technology that seemed to come from nowhere and took the IT world by storm just a few years ago. The concept of containerization is not new, but Docker took this concept and made it very popular. The idea behind a container is that you can segregate an application you’d like to run from the rest of your system, keeping it sandboxed from the host operating system, while still being able to use the host’s CPU and memory resources. Unlike a virtual machine, a container doesn’t have a virtual CPU and memory of its own; it shares resources with the host. This means that you will likely be able to run more containers than virtual machines on a server, since the resource utilization is lower. In addition, you can store a container on a server and allow others within your organization to download a copy of it and run it locally. This is very useful for developers who are working on a new solution and would like others to test or run it. Since a Docker container contains everything the application needs to run, it’s very unlikely that a systematic difference between one machine and another will cause the application to behave differently.

The Docker server, also known as Hub, can be used remotely or locally. Normally, you’d pull down a container image from the central Docker Hub instance, which makes a wide variety of images available, usually based on a Linux distribution or operating system. When you download an image locally, you’ll be able to install packages within the container or make changes to its files, just as if it were a virtual machine. When you finish setting up your application within the container, you can upload it back to Docker Hub for others to benefit from, or to your own local Hub instance for your staff members to use. In some cases, developers even opt to make their software available to others in the form of containers rather than creating distribution-specific packages. Perhaps they find it easier to maintain a single container that works on every distribution than to build separate packages for individual distributions.

Let’s go ahead and get started. To set up your server to run or manage Docker containers, simply install the docker.io package:

# apt-get install docker.io

Yes, that’s all there is to it. Installing Docker has definitely been the easiest thing we’ve done during this entire article. Ubuntu includes Docker in its default repositories, so it’s only a matter of installing this one package. You’ll now have a new service running on your machine, simply titled docker. You can inspect it with the systemctl command, as you would any other:

# systemctl status docker

Now that Docker is installed and running, let’s take it for a test drive. Having Docker installed gives us the docker command, which has various subcommands to perform different functions. Let’s try out docker search:

# docker search ubuntu

What we’re doing with this command is searching Docker Hub for available containers based on Ubuntu. You could search for containers based on other distributions, such as Fedora or CentOS, if you wanted. The command will return a list of Docker images available that meet your search criteria.

The search command was run as root. This is required, unless you make your own user account a member of the docker group. I recommend you do that and then log out and log in again. That way, you won’t need to use root anymore. From this point on, I won’t suggest using root for the remaining Docker examples. It’s up to you whether you want to set up your user account with the docker group or continue to run docker commands as root.
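If you’d like to take that route, adding your account to the docker group is a one-liner. A hedged sketch follows; the username jay is just a placeholder for your own account name:

```shell
# Add your user to the docker group (replace "jay" with your username).
sudo usermod -aG docker jay

# Log out and back in (or start a new login shell), then confirm membership:
groups jay
```

Once the group membership takes effect, the remaining docker commands in this article can be run without root.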

To pull down a docker image for our use, we can use the docker pull command, along with one of the image names we saw in the output of our search command:

docker pull ubuntu

With this command, we’re pulling down the latest Ubuntu container image available on Docker Hub. The image will now be stored locally, and we’ll be able to create new containers from it. To create a new container from our downloaded image, this command will do the trick:

docker run -it ubuntu:latest /bin/bash

Once you run this command, you’ll notice that your shell prompt immediately changes. You’re now within a shell prompt from your container. From here, you can run commands you would normally run within a real Ubuntu machine, such as installing new packages, changing configuration files, and so on. Go ahead and play around with the container, and then we’ll continue on with a bit more theory on how it actually works.
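Before moving on, here are a few things you might try from the container’s prompt. These assume the stock Ubuntu image, which ships a fairly minimal package set:

```shell
# Inside the container:
cat /etc/os-release        # confirm which Ubuntu release the image is based on
apt-get update             # refresh the package index
apt-get install -y nano    # install a package, just as on a full Ubuntu system
```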

There are some potentially confusing aspects of Docker we should get out of the way first before we continue with additional examples. The most likely thing to confuse newcomers to Docker is how containers are created and destroyed. When you execute the docker run command against an image you’ve downloaded, you’re actually creating a container. Each time you use the docker run command, you’re not resuming the last container, but creating a new one. To see this in action, run a container with the docker run command provided earlier, and then type exit. Run it again, and then type exit again. You’ll notice that the prompt is different each time you run the command. After the root@ portion of the bash prompt within the container is a portion of a container ID. It’ll be different each time you execute the docker run command, since you’re creating a new container with a new ID each time.

To see the number of containers on your server, execute the docker info command. The first line of the output will tell you how many containers you have on your system, which should be the number of times you’ve run the docker run command. To see a list of all of these containers, execute the docker ps -a command:

docker ps -a

The output will give you the container ID of each container, the image it was created from, the command being run, when the container was created, its status, and any ports you may have forwarded. The output will also display a randomly generated name for each container, and these names are usually quite wacky. As I was going through the process of creating containers while writing this section, the codenames for my containers were tender_cori, serene_mcnulty, and high_goldwasser. This is just one of the many quirks of Docker, and some of these can be quite hilarious.

The important output of the docker ps -a command is the container ID, the command, and the status. The ID allows you to reference a specific container. The command lets you know what command was run. In our example, we executed /bin/bash when we started our containers. Using the ID, we can resume a container. Simply execute the docker start command with the container ID right after. Your command will end up looking similar to the following:

docker start 353c6fe0be4d

The output will simply return the ID of the container and then drop you back to your shell prompt. Not the shell prompt of your container, but that of your server. You might be wondering at this point, then, how you get back to the shell prompt for the container. We can use docker attach for that:

docker attach 353c6fe0be4d

You should now be within a shell prompt inside your container. If you remember from earlier, when you type exit to disconnect from your container, the container stops. If you’d like to exit the container without stopping it, press CTRL + P and then CTRL + Q on your keyboard. You’ll return to your main shell prompt, but the container will still be running. You can see this for yourself by checking the status of your containers with the docker ps -a command.

However, while these keyboard shortcuts work to get you out of the container, it’s important to understand what a container is and what it isn’t. A container is not a service running in the background, at least not inherently. A container is a collection of namespaces, such as a namespace for its filesystem or users. When you disconnect without a process running within the container, there’s no reason for it to run, since its namespace is empty. Thus, it stops. If you’d like to run a container in a way that is similar to a service (it keeps running in the background), you would want to run the container in detached mode. Basically, this is a way of telling your container, “run this process, and don’t stop running it until I tell you to.” Here’s an example of creating a container and running it in detached mode:

docker run -dit ubuntu /bin/bash

Normally, we use the -it options to create a container. This is what we used a few pages back. The -i option triggers interactive mode, while the -t option gives us a pseudo-TTY. At the end of the command, we tell the container to run the Bash shell. The -d option runs the container in the background.

It may seem relatively useless to have another Bash shell running in the background that isn’t actually performing a task. But these are just simple examples to help you get the hang of Docker. A more common use case may be to run a specific application. In fact, you can even run a website from a Docker container by installing and configuring Apache within the container, including a virtual host. The question then becomes this: how do you access the container’s instance of Apache within a web browser? The answer is port redirection, which Docker also supports. Let’s give this a try.

First, let’s create a new container in detached mode. Let’s also redirect port 80 within the container to port 8080 on the host:

docker run -dit -p 8080:80 ubuntu /bin/bash

The command will output a container ID. This ID will be much longer than you’re accustomed to seeing, because when we run docker ps -a, it only shows shortened container IDs. You don’t need to use the entire container ID when you attach; you can simply use part of it, so long as it’s long enough to be different from other IDs—like this:

docker attach dfb3e

Here, I’ve attached to a container with an ID that begins with dfb3e. I’m now attached to a Bash shell within the container.

Let’s install Apache. We’ve done this before, but to keep it simple, just install the apache2 package within your container; we don’t need to worry about configuring the default sample web page or making it look nice. We just want to verify that it works. Apache should now be installed within the container. In my tests, the apache2 daemon wasn’t automatically started as it would’ve been on a real server instance. Since the latest Ubuntu container available on Docker Hub hadn’t yet been upgraded to 16.04 at the time of writing (it’s currently 14.04), the systemctl command won’t work, so we’ll need to use the legacy start command for Apache:

# /etc/init.d/apache2 start

We can similarly check the status, to make sure it’s running:

# /etc/init.d/apache2 status

Apache should be running within the container. Now, press CTRL + P and then CTRL + Q to exit the container, but allow it to keep running in the background. You should be able to visit the sample Apache web page for the container by navigating to localhost:8080 in your web browser. You should see the default “It works!” page that comes with Apache. Congratulations, you’re officially running an application within a container!
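If you’re working on a headless server without a browser, you can verify the same thing from the host’s command line, assuming curl is installed:

```shell
# Fetch the container's default page through the forwarded port:
curl -s http://localhost:8080 | grep -i "it works"
```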

Before we continue, think for a moment of all the use cases you can use Docker for. It may seem like a very simple concept (and it is), but it allows you to do some very powerful things. I’ll give you a personal example. At a previous job, I worked with some embedded Linux software engineers, who each had their preferred Linux distribution to run on their workstation computers. Some preferred Ubuntu, others preferred Debian, and a few even ran Gentoo. For developers, this poses a problem—the build tools are different in each distribution, because they all ship different versions of all development packages. The application they developed was only known to compile in Debian, and newer versions of the GNU Compiler Collection (GCC) compiler posed a problem for the application. My solution was to provide each developer a Docker container based on Debian, with all the build tools baked in that they needed to perform their job. At this point, it no longer mattered which distribution they ran on their workstations. The container was the same no matter what they were running. I’m sure there are some clever use cases you can come up with.

Anyway, back to our Apache container: it’s now running happily in the background, responding to HTTP requests over port 8080 on the host. But, what should we do with it at this point? One thing we can do is create our own image from it. Before we do, we should configure Apache to automatically start when the container is started. We’ll do this a bit differently inside the container than we would on an actual Ubuntu server. Attach to the container, and open the /etc/bash.bashrc file in a text editor within the container. Add the following to the very end of the file:

/etc/init.d/apache2 start

Save the file, and exit your editor. Exit the container with the CTRL + P and CTRL + Q key combinations. We can now create a new image of the container with the docker commit command:

docker commit <Container ID> ubuntu:apache-server

This command will return to us the ID of our new image. To view all the Docker images available on our machine, we can run the docker images command to have Docker return a list. You should see the original Ubuntu image we downloaded, along with the one we just created. We’ll first see a column for the repository the image came from. In our case, it’s Ubuntu. Next, we can see the tag. Our original Ubuntu image (the one we used docker pull command to download) has a tag of latest. We didn’t specify that when we first downloaded it, it just defaulted to latest. In addition, we see an image ID for both, as well as the size.
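To see this for yourself, list your local images. You can also give an image an additional name with docker tag; the packt/apache-server name below is just an example, not something from Docker Hub:

```shell
# List all images stored locally (repository, tag, image ID, size):
docker images

# Optionally apply an extra name:tag of your choosing to the new image:
docker tag ubuntu:apache-server packt/apache-server:1.0
```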

To create a new container from our new image, we just need to use docker run but specify the tag and name of our new image. Note that we may already have a container listening on port 8080, so this command may fail if that container hasn’t been stopped:

docker run -dit -p 8080:80 ubuntu:apache-server /bin/bash

Speaking of stopping a container, I should probably show you how to do that as well. As you could probably guess, the command is docker stop followed by a container ID. This will send the SIGTERM signal to the container, followed by SIGKILL if it doesn’t stop on its own after a delay:

docker stop <Container ID>

To remove a container, issue the docker rm command followed by a container ID. Normally, this will not remove a running container, but it will if you add the -f option. You can remove more than one docker container at a time by adding additional container IDs to the command, with a space separating each. Keep in mind that you’ll lose any unsaved changes within your container if you haven’t committed the container to an image yet:
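As a quick sketch, cleaning up several containers at once, or a container that’s still running, might look like this (the IDs are placeholders):

```shell
# Remove two stopped containers in one command:
docker rm <ID1> <ID2>

# Force-remove a container that is still running:
docker rm -f <Running container ID>
```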

docker rm <Container ID>

The docker rm command will not remove images. If you want to remove a Docker image, use the docker rmi command followed by an image ID. You can run the docker images command to view the images stored on your server, so you can easily fetch the ID of the image you want to remove. You can also use the repository and tag name, such as ubuntu:apache-server, instead of the image ID. If the image is in use, you can force its removal with the -f option:

docker rmi <Image ID>

Before we conclude our look into Docker, there’s another related concept you’ll definitely want to check out: Dockerfiles. A Dockerfile is a neat way of automating the building of docker images, by creating a text file with a set of instructions for their creation. The easiest way to set up a Dockerfile is to create a directory, preferably with a descriptive name for the image you’d like to create (you can name it whatever you wish, though) and inside it create a file named Dockerfile. Following is a sample—copy this text into your Dockerfile and we’ll look at how it works:

FROM ubuntu
MAINTAINER Jay <[email protected]>

# Update the container's packages
RUN apt-get update; apt-get dist-upgrade -y

# Install apache2 and vim
RUN apt-get install -y apache2 vim

# Make Apache automatically start up
RUN echo "/etc/init.d/apache2 start" >> /etc/bash.bashrc

Let’s go through this Dockerfile line by line to get a better understanding of what it’s doing:

FROM ubuntu

We need an image to base our new image on, so we’re using Ubuntu as a base. This will cause Docker to download the ubuntu:latest image from Docker Hub if we don’t already have it downloaded:

MAINTAINER Jay <[email protected]>

Here, we’re setting the maintainer of the image. Basically, we’re declaring its author:

# Update the container's packages

Lines beginning with a hash symbol (#) are ignored, so we are able to create comments within the Dockerfile. This is recommended to give others a good idea of what your Dockerfile does:

RUN apt-get update; apt-get dist-upgrade -y

With the RUN command, we’re telling Docker to run a specific command while the image is being created. In this case, we’re updating the image’s repository index and performing a full package update to ensure the resulting image is as fresh as can be. The -y option is provided to suppress any requests for confirmation while the command runs:

RUN apt-get install -y apache2 vim

Next, we’re installing both apache2 and vim. The vim package isn’t required, but I personally like to make sure all of my servers and containers have it installed. I mainly included it here to show you that you can install multiple packages in one line:

RUN echo "/etc/init.d/apache2 start" >> /etc/bash.bashrc

Earlier, we copied the startup command for the apache2 daemon into the /etc/bash.bashrc file. We’re including that here so that we won’t have to do this ourselves when containers are created from the image.

To build the image, we can use the docker build command, which can be executed from within the directory that contains the Dockerfile. What follows is an example of using the docker build command to create an image tagged packt:apache-server:

docker build -t packt:apache-server .

Once you run this command, you’ll see Docker create the image for you, running each of the commands you asked it to. The image will be set up just the way you like. Basically, we just automated the entire creation of the Apache container we used as an example in this section. Once this is complete, we can create a container from our new image:

docker run -dit -p 8080:80 packt:apache-server /bin/bash

Almost immediately after running the container, the sample Apache site will be available on the host. With a Dockerfile, you’ll be able to automate the creation of your Docker images. There’s much more you can do with Dockerfiles though; feel free to peruse Docker’s official documentation to learn more.
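For instance, instead of the bash.bashrc workaround, a Dockerfile can declare the port the image serves and the process to run when a container starts. The following is a hedged sketch, not the book’s method: it assumes the stock ubuntu base image and uses apache2ctl to keep Apache in the foreground:

```dockerfile
FROM ubuntu

RUN apt-get update; apt-get install -y apache2

# Document that the container serves HTTP on port 80
EXPOSE 80

# Run Apache in the foreground so the container keeps running on its own
CMD ["apache2ctl", "-D", "FOREGROUND"]
```

With an image built from this file, you could start the container with docker run -d -p 8080:80 and your image name, with no /bin/bash needed; the container runs Apache directly and stops when Apache does.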

Summary

In this article, we took a look at containerization with Docker, which is a great way of virtualizing individual applications rather than entire servers. We installed Docker on our server, and we walked through managing containers: pulling down an image from Docker Hub, customizing our own images, and creating Dockerfiles to automate the building of Docker images. We also went over many of the common Docker commands used to manage containers.
