
Network and Data Management for Containers


In this article by Neependra Khare, author of the book Docker Cookbook, we look at network and data management for containers. When the Docker daemon starts, it creates a virtual Ethernet bridge with the name docker0. For example, we will see the following with the ip addr command on the system that runs the Docker daemon:

[Screenshot: ip addr output showing the docker0 bridge]


As we can see, docker0 has the IP address 172.17.42.1/16. Docker randomly chooses an address and subnet from a private range defined in RFC 1918 (https://tools.ietf.org/html/rfc1918). Using this bridged interface, containers can communicate with each other and with the host system.
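As a quick sanity check, an address such as 172.17.42.1 falls inside the 172.16.0.0/12 block that RFC 1918 reserves; a minimal shell sketch (the address is just the one from our example):

```shell
# Check whether an IPv4 address falls in the RFC 1918 block 172.16.0.0/12,
# which covers 172.16.0.0 through 172.31.255.255.
ip="172.17.42.1"
# Split the dotted quad into its four octets
IFS=. read -r o1 o2 o3 o4 <<EOF
$ip
EOF
if [ "$o1" -eq 172 ] && [ "$o2" -ge 16 ] && [ "$o2" -le 31 ]; then
  in_rfc1918=yes
else
  in_rfc1918=no
fi
echo "$ip private: $in_rfc1918"
```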

By default, every time Docker starts a container, it creates a pair of virtual interfaces, one end of which is attached to the host system and the other end to the created container. Let’s start a container and see what happens:

[Screenshot: starting a container and viewing its network interfaces]

The end that is attached to the eth0 interface of the container gets the 172.17.0.1/16 IP address. We also see the following entry for the other end of the interface on the host system:

[Screenshot: the veth interface entry on the host]

Now, let’s create a few more containers and look at the docker0 bridge with the brctl command, which manages Ethernet bridges:

[Screenshot: brctl show output for the docker0 bridge]

Every veth* interface binds to the docker0 bridge, which creates a virtual subnet shared between the host and every Docker container. Apart from setting up the docker0 bridge, Docker creates iptables NAT rules, such that all containers can talk to the external world by default, but not the other way around. Let’s look at the NAT rules on the Docker host:
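The exact output differs between Docker versions, but the key entry is a MASQUERADE rule in the nat table along these lines (an illustrative rule in iptables-save format, not output copied from a live host):

```
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
```

This source-NATs traffic leaving the containers' subnet, which is why containers can reach the outside world while remaining unreachable from it unless ports are published.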

[Screenshot: NAT rules on the Docker host]

If we try to connect to the external world from a container, we will have to go through the Docker bridge that was created by default:

[Screenshot: connecting to the external world from a container]

When starting a container, we have a few modes to select its networking:

  • --net=bridge: This is the default mode that we just saw. So, the preceding command that we used to start the container can be written as follows:
    $ docker run -i -t --net=bridge centos /bin/bash
  • --net=host: With this option, Docker does not create a network namespace for the container; instead, the container shares the network stack of the host. So, we can start the container with this option as follows:
    $ docker run -i -t --net=host centos bash

    We can then run the ip addr command within the container as seen here:

    [Screenshot: ip addr output inside a container started with --net=host]

    We can see all the network devices attached to the host. An example of using such a configuration is to run the nginx reverse proxy within a container to serve the web applications running on the host.

  • --net=container:NAME_or_ID: With this option, Docker does not create a new network namespace while starting the container; instead, it shares the network namespace of another container. Let’s start the first container and look for its IP address:
    $ docker run -i -t --name=centos centos bash

    [Screenshot: IP address of the centos container]

    Now start another as follows:

    $ docker run -i -t --net=container:centos ubuntu bash

    [Screenshot: IP address inside the second container]

    As we can see, both containers have the same IP address.

    Containers in a Kubernetes (http://kubernetes.io/) Pod use this trick to connect with each other.

  • --net=none: With this option, Docker creates the network namespace inside the container but does not configure networking.

    For more information about the different networking options, visit https://docs.docker.com/articles/networking/#how-docker-networks-a-container.

From Docker 1.2 onwards, it is also possible to change /etc/hosts, /etc/hostname, and /etc/resolv.conf on a running container. However, note that these changes last only for the lifetime of the container; if it restarts, we will have to make the changes again.

So far, we have looked at networking on a single host, but in the real world, we would like to connect multiple hosts and have a container on one host talk to a container on another host. Flannel (https://github.com/coreos/flannel), Weave (https://github.com/weaveworks/weave), Calico (http://www.projectcalico.org/getting-started/docker/), and Socketplane (http://socketplane.io/) are some solutions that offer this functionality. Socketplane joined Docker Inc. in March 2015.

The community and Docker are building a Container Network Model (CNM) with libnetwork (https://github.com/docker/libnetwork), which provides a native Go implementation for connecting containers. More information on this development can be found at http://blog.docker.com/2015/04/docker-networking-takes-a-step-in-the-right-direction-2/.

Accessing containers from outside

Once the container is up, we would like to access it from outside. If you have started the container with the --net=host option, then it can be accessed through the Docker host IP. With --net=none, you can attach the network interface from the public end or through other complex settings. Let’s see what happens in the default case, where packets are forwarded from the host network interface to the container.

Getting ready

Make sure the Docker daemon is running on the host and you can connect through the Docker client.

How to do it…

  1. Let’s start a container with the -P option:
    $ docker run --expose 80 -i -d -P --name f20 fedora /bin/bash

    [Screenshot: docker ps output showing the mapped port]

    This automatically maps any exposed network port of the container to a random high port of the Docker host between 49000 and 49900.

    In the PORTS section, we see 0.0.0.0:49159->80/tcp, which is of the following form:

    <Host Interface>:<Host Port> -> <Container Port>/<protocol>

    So, if a request comes in on port 49159 from any interface on the Docker host, it will be forwarded to port 80 of the f20 container.

    We can also map a specific port of the container to the specific port of the host using the -p option:

    $ docker run -i -d -p 5000:22 --name centos2 centos /bin/bash

    [Screenshot: docker ps output for the centos2 container]

In this case, all requests coming on port 5000 from any interface on the Docker host will be forwarded to port 22 of the centos2 container.
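The PORTS strings that docker ps prints follow this fixed format, so they can be pulled apart with ordinary shell parameter expansion; a small sketch (the mapping value is the hypothetical one from our earlier example):

```shell
# Split a Docker port-mapping string of the form
# <Host Interface>:<Host Port>-><Container Port>/<protocol>
mapping="0.0.0.0:49159->80/tcp"
host_side="${mapping%%->*}"             # 0.0.0.0:49159
host_iface="${host_side%%:*}"           # 0.0.0.0
host_port="${host_side##*:}"            # 49159
container_side="${mapping#*->}"         # 80/tcp
container_port="${container_side%%/*}"  # 80
protocol="${container_side#*/}"         # tcp
echo "host $host_iface:$host_port -> container $container_port/$protocol"
```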

How it works…

With the default configuration, Docker sets up the firewall rule to forward the connection from the host to the container and enables IP forwarding on the Docker host:

[Screenshot: DNAT rule and IP forwarding settings on the Docker host]

As we can see from the preceding example, a DNAT rule has been set up to forward all traffic on port 5000 of the host to port 22 of the container.
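The exact rule varies by Docker version, but the forwarding entry typically looks like the following (illustrative, in iptables-save format; the container IP 172.17.0.3 is a made-up example):

```
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 5000 -j DNAT --to-destination 172.17.0.3:22
```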

There’s more…

By default, with the -p option, Docker binds to all interfaces of the host, so requests arriving on any of them are forwarded. To bind to a specific interface, we can specify something like the following:

$ docker run -i -d -p 192.168.1.10:5000:22 --name f20 fedora /bin/bash

In this case, only requests coming to port 5000 on the interface that has the IP 192.168.1.10 on the Docker host will be forwarded to port 22 of the f20 container. To map port 22 of the container to a dynamic port of the host, we can run the following command:

$ docker run -i -d -p 192.168.1.10::22 --name f20 fedora /bin/bash

We can bind multiple ports on containers to ports on hosts as follows:

$ docker run -d -i -p 5000:22 -p 8080:80 --name f20 fedora /bin/bash

We can look up the public-facing port that is mapped to the container’s port as follows:

$ docker port f20 80
0.0.0.0:8080

To look at all the network settings of a container, we can run the following command:

$ docker inspect   -f "{{ .NetworkSettings }}" f20 


Managing data in containers

Any uncommitted data or changes in containers get lost as soon as the containers are deleted. For example, if you have configured the Docker registry in a container and pushed some images, then as soon as the registry container is deleted, all of those images will be lost if you have not committed them. Even committing is not best practice; we should try to keep containers as light as possible. The following are two primary ways to manage data with Docker:

  • Data volumes: From the Docker documentation (https://docs.docker.com/userguide/dockervolumes/), a data volume is a specially-designated directory within one or more containers that bypasses the Union filesystem to provide several useful features for persistent or shared data:
    • Volumes are initialized when a container is created. If the container’s base image contains data at the specified mount point, that data is copied into the new volume.
    • Data volumes can be shared and reused between containers.
    • Changes to a data volume are made directly.
    • Changes to a data volume will not be included when you update an image.
    • Volumes persist until no containers use them.
  • Data volume containers: As a volume persists until no container uses it, we can use the volume to share persistent data between containers. So, we can create a named volume container and mount the data to another container.

Getting ready

Make sure that the Docker daemon is running on the host and you can connect through the Docker client.

How to do it…

  1. Add a data volume. With the -v option with the docker run command, we add a data volume to the container:
    $ docker run -t -d -P -v /data --name f20 fedora /bin/bash

    We can have multiple data volumes within a container, which can be created by adding -v multiple times:

    $ docker run -t -d -P -v /data -v /logs --name f20 fedora /bin/bash

    The VOLUME instruction can be used in a Dockerfile to add a data volume as well, by adding something similar to VOLUME ["/data"].

    We can use the inspect command to look at the data volume details of a container:

    $ docker inspect -f "{{ .Config.Volumes }}" f20
    $ docker inspect -f "{{ .Volumes }}" f20

    [Screenshot: data volume details from docker inspect]

    If the target directory is not there within the container, it will be created.

  2. Next, we mount a host directory as a data volume. We can also map a host directory to a data volume with the -v option:
    $ docker run -i -t -v /source_on_host:/destination_on_container fedora /bin/bash

    Consider the following example:

    $ docker run -i -t -v /srv:/mnt/code fedora /bin/bash

    This can be very useful in cases such as testing code in different environments, collecting logs in central locations, and so on. We can also map the host directory in read-only mode as follows:

    $ docker run -i -t -v /srv:/mnt/code:ro fedora /bin/bash

    We can also mount the entire root filesystem of the host within the container with the following command:

    $ docker run -i -t -v /:/host:ro fedora /bin/bash

    If the directory on the host (/srv) does not exist, then it will be created, given that you have permission to create one. Also, on a Docker host where SELinux is enabled, if the Docker daemon is configured to use SELinux (docker -d --selinux-enabled), you will see a permission denied error when you try to access files on mounted volumes until you relabel them. To relabel them, use either of the following commands:

    $ docker run -i -t -v /srv:/mnt/code:z fedora /bin/bash
    $ docker run -i -t -v /srv:/mnt/code:Z fedora /bin/bash
  3. Now, create a data volume container. While sharing a host directory with a container through a volume, we are binding the container to a given host, which is not good. Also, the storage in this case is not controlled by Docker. So, in cases where we want data to persist even if we update the containers, we can get help from data volume containers. Data volume containers are used only to create a volume; they do not even need to run. As long as the created volume is attached to a container (even one that is not running), it cannot be deleted. For example, here’s a named data container:
    $ docker run -d -v /data --name data fedora echo "data volume container"

    This will just create a volume that will be mapped to a directory managed by Docker. Now, other containers can mount the volume from the data container using the --volumes-from option as follows:

    $ docker run -d -i -t --volumes-from data --name client1 fedora /bin/bash

    We can mount a volume from the data volume container to multiple containers:

    $ docker run -d -i -t --volumes-from data --name client2 fedora /bin/bash

    [Screenshot: containers mounting the volume from the data container]

    We can also use --volumes-from multiple times to get the data volumes from multiple containers. We can also create a chain by mounting volumes from a container that itself mounts them from some other container.
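As mentioned in step 1, the VOLUME instruction lets an image declare volumes at build time, so every container created from it gets them automatically; a minimal Dockerfile sketch (the base image and paths are illustrative):

```
FROM fedora
# Declare two data volumes; containers from this image get them without -v
VOLUME ["/data", "/logs"]
```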

How it works…

In the case of a data volume, when a host directory is not shared, Docker creates a directory within /var/lib/docker/ and then shares it with other containers.

There’s more…

  • Volumes are deleted with the -v flag to docker rm only if no other container is using them. If another container is using the volume, the container will be removed (with docker rm) but the volume will not.
  • The Docker registry, by default, starts with the dev flavor, in which uploaded images are saved in the /tmp/registry folder within the container. We can mount a directory from the host at /tmp/registry within the registry container, so whenever we upload an image, it will be saved on the host that is running the Docker registry. So, to start the container, we run the following command:
    $ docker run -v /srv:/tmp/registry -p 5000:5000 registry

    To push an image, we run the following command:

    $ docker push registry-host:5000/nkhare/f20

    After the image is successfully pushed, we can look at the content of the directory that we mounted within the Docker registry. In our case, we should see a directory structure as follows:

    /srv/
    ├── images 
    │   ├── 3f2fed40e4b0941403cd928b6b94e0fd236dfc54656c00e456747093d10157ac 
    │   │   ├── ancestry 
    │   │   ├── _checksum 
    │   │   ├── json 
    │   │   └── layer 
    │   ├── 511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158 
    │   │   ├── ancestry 
    │   │   ├── _checksum 
    │   │   ├── json 
    │   │   └── layer 
    │   ├── 53263a18c28e1e54a8d7666cb835e9fa6a4b7b17385d46a7afe55bc5a7c1994c 
    │   │   ├── ancestry 
    │   │   ├── _checksum 
    │   │   ├── json 
    │   │   └── layer 
    │   └── fd241224e9cf32f33a7332346a4f2ea39c4d5087b76392c1ac5490bf2ec55b68 
    │       ├── ancestry 
    │       ├── _checksum 
    │       ├── json 
    │       └── layer 
    ├── repositories 
    │   └── nkhare 
    │       └── f20 
    │           ├── _index_images 
    │           ├── json 
    │           ├── tag_latest 
    │           └── taglatest_json 
    


Linking two or more containers

With containerization, we would like to create our stack by running each service in its own container and then linking the containers together. Container linking creates a parent-child relationship between them, in which the parent can see selected information about its children. Linking relies on the naming of containers.

Getting ready

Make sure the Docker daemon is running on the host and you can connect through the Docker client.

How to do it…

  1. Create a named container called centos_server:
    $ docker run -d -i -t --name centos_server centos /bin/bash

    [Screenshot: starting the centos_server container]

  2. Now, let’s start another container with the name client and link it with the centos_server container using the --link option, which takes the name:alias argument. Then, look at the /etc/hosts file:
    $ docker run -i -t --link centos_server:server --name client fedora /bin/bash

    [Screenshot: /etc/hosts of the client container showing the server entry]

How it works…

In the preceding example, we linked the centos_server container to the client container with the alias server. By linking the two containers, an entry for the first container, which is centos_server in this case, is added to the /etc/hosts file of the client container. Also, an environment variable called SERVER_NAME is set within the client to refer to the server.

[Screenshot: the SERVER_NAME environment variable in the client container]

There’s more…

Now, let’s create a mysql container:

$ docker run --name mysql -e MYSQL_ROOT_PASSWORD=mysecretpassword -d mysql

Then, let’s link it from a client and check the environment variables:

$ docker run -i -t --link mysql:mysql-server --name client fedora /bin/bash

[Screenshot: environment variables in the client container]

Also, let’s look at the docker ps output:

[Screenshot: docker ps output showing the linked containers]

If you look closely, we did not specify the -P or -p options to map ports between the two containers while starting the client container. Depending on the ports exposed by a container, Docker creates an internal secure tunnel to the containers that link to it. To do that, Docker sets environment variables within the linker container. In the preceding case, mysql is the linked container and client is the linker container. As the mysql container exposes port 3306, we see the corresponding environment variables (MYSQL_SERVER_*) within the client container.
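For the alias mysql-server and an exposed port of 3306, the injected variables follow the <ALIAS>_PORT_* naming pattern (the alias is uppercased and dashes become underscores) and look roughly like this (the container IP is a made-up example):

```
MYSQL_SERVER_PORT=tcp://172.17.0.5:3306
MYSQL_SERVER_PORT_3306_TCP=tcp://172.17.0.5:3306
MYSQL_SERVER_PORT_3306_TCP_ADDR=172.17.0.5
MYSQL_SERVER_PORT_3306_TCP_PORT=3306
MYSQL_SERVER_PORT_3306_TCP_PROTO=tcp
```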

As linking depends on the name of the container, if you want to reuse a name, you must delete the old container.


Summary

In this article, we learned how to connect a container with another container and with the external world. We also learned how we can share storage from the host system and from other containers.
