
In this article, Jon Langemak, the author of the book Docker Networking Cookbook, covers the following recipes:

  • Verifying host based DNS configuration inside a container
  • Overriding the default name resolution settings
  • Configuring links for name and service resolution
  • Leveraging Docker DNS

Verifying host based DNS configuration inside a container

You might not realize it, but Docker, by default, provides your containers with a means to do basic name resolution. Docker passes name resolution from the Docker host directly into the container. The result is that a spawned container can natively resolve anything that the Docker host itself can. The mechanics Docker uses to achieve name resolution in a container are elegantly simple. In this recipe, we’ll walk through how this is done and how you can verify that it’s working as expected.

Getting Ready

In this recipe we’ll be demonstrating the configuration on a single Docker host. It is assumed that this host has Docker installed and that Docker is in its default configuration. We’ll be altering name resolution settings on the host so you’ll need root level access.

How to do it…

To start with, let’s start a new container on our host docker1 and examine how the container handles name resolution:

user@docker1:~$ docker run -d -P --name=web8 \
jonlangemak/web_server_8_dns
d65baf205669c871d1216dc091edd1452a318b6522388e045c211344815c280a
user@docker1:~$
user@docker1:~$ docker exec web8 host www.google.com
www.google.com has address 216.58.216.196
www.google.com has IPv6 address 2607:f8b0:4009:80e::2004 
user@docker1:~$

It would appear that the container has the ability to resolve DNS names. If we look at our local Docker host and run the same test, we should get similar results:

user@docker1:~$ host www.google.com
www.google.com has address 216.58.216.196
www.google.com has IPv6 address 2607:f8b0:4009:80e::2004
user@docker1:~$ 

In addition, just like our Docker host, the container can also resolve local DNS records associated with the local domain lab.lab:

user@docker1:~$ docker exec web8 host docker4
docker4.lab.lab has address 192.168.50.102
user@docker1:~$

You’ll notice that we didn’t need to specify a fully qualified domain name in order to resolve the hostname docker4 in the domain lab.lab. At this point, it’s safe to assume that the container is receiving some sort of intelligent update from the Docker host, providing it relevant information about the local DNS configuration.
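This short-name behavior comes from the resolver’s search-domain expansion rather than anything Docker-specific. As a rough illustration, here is a small Python sketch of how a resolver turns a short name plus the search list into lookup candidates (the function is hypothetical, not glibc’s actual code):

```python
def candidate_names(hostname, search_domains, ndots=1):
    """Return the FQDN candidates a resolver would try for a name,
    mimicking glibc's search-domain expansion."""
    # A name ending in a dot is absolute and tried as-is only.
    if hostname.endswith("."):
        return [hostname.rstrip(".")]
    names = []
    if hostname.count(".") >= ndots:
        names.append(hostname)  # enough dots: try the name as-is first
    # The search domains are appended in order.
    names.extend(f"{hostname}.{domain}" for domain in search_domains)
    if hostname.count(".") < ndots:
        names.append(hostname)  # bare short name is tried last
    return names

print(candidate_names("docker4", ["lab.lab"]))
# ['docker4.lab.lab', 'docker4']
```

This is why `host docker4` inside the container resolves docker4.lab.lab: the search lab.lab line copied from the host drives the expansion.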

In case you don’t know, the resolv.conf file is generally where you define a Linux system’s name resolution parameters. In many cases it is altered automatically by configuration information in other places. However – regardless of how it’s altered, it should always be the source of truth for how the system handles name resolution.

To see what the container is receiving, let’s examine the container’s resolv.conf file:

user@docker1:~$ docker exec -t web8 more /etc/resolv.conf
::::::::::::::
/etc/resolv.conf
::::::::::::::
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 10.20.30.13
search lab.lab
user@docker1:~$

As you can see, the container has learned that the local DNS server is 10.20.30.13 and that the local DNS search domain is lab.lab. Where did it get this information? The answer is rather simple. When a container starts, Docker generates instances of the following three files for each spawned container and saves them with the container configuration:

  • /etc/hostname
  • /etc/hosts
  • /etc/resolv.conf

These files are stored as part of the container configuration and then mounted into the container. We can use the findmnt tool from within the container to examine the source of the mounts:

root@docker1:~# docker exec web8 findmnt -o SOURCE
…<Additional output removed for brevity>…
/dev/mapper/docker1--vg-root[/var/lib/docker/containers/c803f130b7a2450609672c23762bce3499dec9abcfdc540a43a7eb560adaf62a/resolv.conf]
/dev/mapper/docker1--vg-root[/var/lib/docker/containers/c803f130b7a2450609672c23762bce3499dec9abcfdc540a43a7eb560adaf62a/hostname]
/dev/mapper/docker1--vg-root[/var/lib/docker/containers/c803f130b7a2450609672c23762bce3499dec9abcfdc540a43a7eb560adaf62a/hosts]
root@docker1:~#

So while the container thinks it has local copies of the hostname, hosts, and resolv.conf files in its /etc/ directory, the real files are actually located in the container’s configuration directory (/var/lib/docker/containers/) on the Docker host.

When you tell Docker to run a container, it does three things:

  • It examines the Docker host’s /etc/resolv.conf file and places a copy of it in the container’s directory.
  • It creates a hostname file in the container’s directory and assigns the container a unique hostname.
  • It creates a hosts file in the container’s directory and adds relevant records, including localhost and a record referencing the host itself.

Each time the container is restarted, the container’s resolv.conf file is regenerated based on the values found in the Docker host’s resolv.conf file. This means that any changes made to the container’s resolv.conf file are lost each time the container is restarted. The hostname and hosts configuration files are also rewritten each time the container is restarted, losing any changes made during the previous run.
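One detail worth knowing: Docker doesn’t copy the host’s resolv.conf completely verbatim. Loopback nameserver entries (such as 127.0.1.1 pointing at a host-local dnsmasq or systemd-resolved, which a container couldn’t reach) are filtered out, and if no usable servers remain, Docker falls back to public DNS servers. A rough Python sketch of that regeneration step (illustrative only; the 8.8.8.8/8.8.4.4 fallback matches Docker’s documented default):

```python
import ipaddress

FALLBACK_NAMESERVERS = ["8.8.8.8", "8.8.4.4"]  # Docker's documented fallback

def build_container_resolvconf(host_resolvconf: str) -> str:
    """Regenerate a container's resolv.conf from the host's copy,
    dropping loopback nameservers the container cannot reach."""
    out = []
    usable_nameservers = 0
    for line in host_resolvconf.splitlines():
        fields = line.split()
        if fields[:1] == ["nameserver"]:
            if ipaddress.ip_address(fields[1]).is_loopback:
                continue  # e.g. 127.0.1.1 from a host-local resolver
            usable_nameservers += 1
        out.append(line)
    if usable_nameservers == 0:
        out.extend(f"nameserver {ns}" for ns in FALLBACK_NAMESERVERS)
    return "\n".join(out) + "\n"

print(build_container_resolvconf("nameserver 127.0.1.1\nsearch lab.lab\n"))
# search lab.lab
# nameserver 8.8.8.8
# nameserver 8.8.4.4
```

On our host the nameserver is 10.20.30.13, which is routable, so the copy passes through unchanged.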

To validate which configuration files a given container is using, we can inspect the container’s configuration for these variables:

user@docker1:~$ docker inspect web8 | grep HostsPath
"HostsPath": "/var/lib/docker/containers/c803f130b7a2450609672c23762bce3499dec9abcfdc540a43a7eb560adaf62a/hosts",
user@docker1:~$ docker inspect web8 | grep HostnamePath
"HostnamePath": "/var/lib/docker/containers/c803f130b7a2450609672c23762bce3499dec9abcfdc540a43a7eb560adaf62a/hostname",
user@docker1:~$ docker inspect web8 | grep ResolvConfPath
"ResolvConfPath": "/var/lib/docker/containers/c803f130b7a2450609672c23762bce3499dec9abcfdc540a43a7eb560adaf62a/resolv.conf",
user@docker1:~$ 

As expected, these are the same mount paths we saw when we ran the findmnt command from within the container itself. Each represents the exact mount path of the respective file into the container’s /etc/ directory.

Overriding the default name resolution settings

The method Docker uses for providing name resolution to containers works very well in most cases. However, there could be some instances where you want Docker to provide the containers with a DNS server other than the one the Docker host is configured to use. In these cases, Docker offers you a couple of options. You can tell the Docker service to provide a different DNS server for all of the containers the service spawns. You can also manually override this setting at container runtime by providing a DNS server as an option to the docker run subcommand. In this recipe, we’ll show you your options for changing the default name resolution behavior as well as how to verify the settings worked.

Getting Ready

In this recipe we’ll be demonstrating the configuration on a single Docker host. It is assumed that this host has Docker installed and that Docker is in its default configuration. We’ll be altering name resolution settings on the host so you’ll need root level access.

How to do it…

As we saw in the first recipe in this article, by default, Docker provides containers with the DNS server that the Docker host itself uses. This comes in the form of copying the host’s resolv.conf file and providing it to each spawned container. Along with the name server setting, this file also includes definitions for DNS search domains. Both of these options can be configured at the service level, covering all spawned containers, as well as at the individual container level.

For the purpose of comparison, let’s start by examining the Docker host’s DNS configuration:

root@docker1:~# more /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 10.20.30.13
search lab.lab
root@docker1:~#

With this configuration, we would expect that any container spawned on this host would receive the same name server and DNS search domain. Let’s spawn a container called web8 to verify this is working as expected:

root@docker1:~# docker run -d -P --name=web8 \
jonlangemak/web_server_8_dns
156bc29d28a98e2fbccffc1352ec390bdc8b9b40b84e4c5f58cbebed6fb63474
root@docker1:~#
root@docker1:~# docker exec -t web8 more /etc/resolv.conf
::::::::::::::
/etc/resolv.conf
::::::::::::::
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 10.20.30.13
search lab.lab

As expected, the container receives the same configuration. Let’s now inspect the container and check whether any DNS-related options are defined:

user@docker1:~$ docker inspect web8 | grep Dns
            "Dns": [],
            "DnsOptions": [],
            "DnsSearch": [],
user@docker1:~$

Since we’re using the default configuration, there is no reason to configure anything specific within the container in regards to a DNS server or search domain. Each time the container starts, Docker will apply the settings from the host’s resolv.conf file to the container’s DNS configuration files.

If we’d prefer to have Docker give containers a different DNS server or DNS search domain, we can do so through Docker options. In this case, the two we’re interested in are:

  • --dns=<DNS server>: Specify a DNS server address that Docker should provide to the containers.
  • --dns-search=<DNS search domain>: Specify a DNS search domain that Docker should provide to the containers.

Let’s configure Docker to provide containers with a public DNS server (4.2.2.2) and a search domain of lab.external. We can do so by passing the following options to the Docker systemd drop-in file:

ExecStart=/usr/bin/dockerd --dns=4.2.2.2 --dns-search=lab.external
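On systems where you’d rather not edit the unit file, the same two settings can typically be supplied through the daemon configuration file instead (shown here as an alternative; use one mechanism or the other, not both):

```json
{
  "dns": ["4.2.2.2"],
  "dns-search": ["lab.external"]
}
```

This would live in /etc/docker/daemon.json, and the service still needs a restart to pick it up.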

Once the options are configured, reload the systemd configuration, restart the service to load the new options, and restart our container web8:

user@docker1:~$ sudo systemctl daemon-reload
user@docker1:~$ sudo systemctl restart docker
user@docker1:~$ docker start web8
web8
user@docker1:~$ docker exec -t web8 more /etc/resolv.conf
search lab.external
nameserver 4.2.2.2
user@docker1:~$

You’ll note that despite this container initially having the host’s DNS server (10.20.30.13) and search domain (lab.lab), it now has the service-level DNS options we just specified. Recall that when we inspected this container earlier, it didn’t define a specific DNS server or search domain. Since none was specified, Docker now uses the settings from the Docker options, which take priority. While this provides some level of flexibility, it’s not yet truly flexible. At this point, any and all containers spawned on this server will be provided the same DNS server and search domain. To be truly flexible, we should be able to have Docker alter the name resolution configuration on a per-container basis. As luck would have it, these options can also be provided directly at container runtime.

Docker uses the following priority when deciding what name resolution settings to apply to a container when it’s started: settings defined at container runtime always take priority. If the settings aren’t defined there, Docker then looks to see if they are configured at the service level. If the settings aren’t there either, it falls back to the default method of relying on the Docker host’s DNS settings.
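That priority order can be modeled in a few lines of Python (a sketch of the selection logic, not Docker’s actual implementation):

```python
def effective_dns(runtime_dns=None, service_dns=None, host_dns=None):
    """Pick the DNS servers a container ends up with: runtime flags beat
    service-level options, which beat the host's resolv.conf."""
    for source in (runtime_dns, service_dns, host_dns):
        if source:  # first non-empty level wins
            return source
    return []

host = ["10.20.30.13"]
print(effective_dns(host_dns=host))                           # ['10.20.30.13']
print(effective_dns(service_dns=["4.2.2.2"], host_dns=host))  # ['4.2.2.2']
print(effective_dns(runtime_dns=["8.8.8.8"],
                    service_dns=["4.2.2.2"], host_dns=host))  # ['8.8.8.8']
```

Note that the selection happens per setting: as we’ll see shortly, a container can take its search domain from the service level while still falling back to the host for its DNS server.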

For instance, we can launch a container called web8-2 and provide different options:

root@docker1:~# docker run -d --dns=8.8.8.8 --dns-search=lab.dmz \
-P --name=web8-2 jonlangemak/web_server_8_dns
1e46d66a47b89d541fa6b022a84d702974414925f5e2dd56eeb840c2aed4880f
root@docker1:~#

If we inspect the container, we’ll see that we now have dns and dns-search fields defined as part of the container configuration:

root@docker1:~# docker inspect web8-2
…<output removed for brevity>…
            "Dns": [
                "8.8.8.8"
            ],
            "DnsOptions": [],
            "DnsSearch": [
                "lab.dmz"
            ],
…<output removed for brevity>…
root@docker1:~# 

This ensures that if the container is restarted, it will still have the same DNS settings that were initially provided the first time the container was run. Let’s make some slight changes to the Docker service to verify the priority is working as expected. Let’s change our Docker options to look like this:

ExecStart=/usr/bin/dockerd --dns-search=lab.external

Now restart the service and run the following container:

user@docker1:~$ sudo systemctl daemon-reload
user@docker1:~$ sudo systemctl restart docker
root@docker1:~#
root@docker1:~# docker run -d -P --name=web8-3 \
jonlangemak/web_server_8_dns
5e380f8da17a410eaf41b772fde4e955d113d10e2794512cd20aa5e551d9b24c
root@docker1:~#

Since we didn’t provide any DNS-related options at container runtime, the next place Docker checks is the service-level options. Because our Docker service-level options include a DNS search domain of lab.external, we’d expect the container to receive that search domain. However, since we don’t have a DNS server defined there, Docker falls back to the one configured on the Docker host itself.

Now examine the container’s resolv.conf file to make sure things worked as expected:

user@docker1:~$ docker exec -t web8-3 more /etc/resolv.conf
search lab.external
nameserver 10.20.30.13
user@docker1:~$

Configuring Links for name and service resolution

Container linking provides a means for one container to easily communicate with another container on the same host. As we’ve seen in previous examples, most container to container communication has occurred through IP addresses. Container linking improves on this by allowing linked containers to communicate with each other by name. In addition to providing basic name resolution, it also provides a means to see what services a linked container is providing. In this recipe we’ll review how to create container links as well as discuss some of their limitations.

Getting Ready

In this recipe we’ll be demonstrating the configuration on a single Docker host. It is assumed that this host has Docker installed and that Docker is in its default configuration. We’ll be altering name resolution settings on the host so you’ll need root level access.

How to do it…

The phrase container linking might imply to some that it involves some kind of network configuration or modification. In reality, container linking has very little to do with container networking. In the default mode, container linking provides a means for one container to resolve the name of another. For instance, let’s start two containers on our lab host docker1:

root@docker1:~# docker run -d -P --name=web1 jonlangemak/web_server_1
88f9c862966874247c8e2ba90c18ac673828b5faac93ff08090adc070f6d2922
root@docker1:~# docker run -d -P --name=web2 --link=web1 \
jonlangemak/web_server_2
00066ea46367c07fc73f73bdcdff043bd4c2ac1d898f4354020cbcfefd408449
root@docker1:~#

Notice how when I started the second container I used a new flag called --link and referenced the container web1. We would now say that web2 is linked to web1. However, they’re not really linked in any sort of way. A better description might be to say that web2 is now aware of web1. Let’s connect to the container web2 to show you what I mean:

root@docker1:~# docker exec -it web2 /bin/bash
root@00066ea46367:/# ping web1 -c 2
PING web1 (172.17.0.2): 48 data bytes
56 bytes from 172.17.0.2: icmp_seq=0 ttl=64 time=0.163 ms
56 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.092 ms
--- web1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.092/0.128/0.163/0.036 ms
root@00066ea46367:/#

It appears that the web2 container is now able to resolve the container web1 by name. This is because the linking process inserted records into the web2 container’s hosts file:

root@00066ea46367:/# more /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2      web1 88f9c8629668
172.17.0.3      00066ea46367
root@00066ea46367:/#

With this configuration, the web2 container can reach the web1 container either by the name we gave the container at runtime (web1) or the unique hostname Docker generated for the container (88f9c8629668).
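The records Docker writes follow a simple pattern: the linked container’s IP address, then the link name, the 12-character short container ID, and, when an alias differs from the container name, the container name as well. A hypothetical sketch of that record generation (not Docker’s source code):

```python
def link_hosts_entry(ip, link_name, container_id, container_name=None):
    """Build the /etc/hosts line legacy linking adds to the source container."""
    names = [link_name, container_id[:12]]  # short ID is the first 12 chars
    if container_name and container_name != link_name:
        names.append(container_name)        # alias case: keep the real name too
    return f"{ip}\t{' '.join(names)}"

print(link_hosts_entry("172.17.0.2", "web1",
                       "88f9c862966874247c8e2ba90c18ac67"))
# 172.17.0.2	web1 88f9c8629668
```

Running it with an alias (`link_hosts_entry("172.17.0.2", "webserver", ..., "web1")`) yields the three-name form we’ll see later in this recipe.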

In addition to the hosts file being updated, web2 also generates some new environmental variables:

root@00066ea46367:/# printenv
WEB1_ENV_APACHE_LOG_DIR=/var/log/apache2
HOSTNAME=00066ea46367
APACHE_RUN_USER=www-data
WEB1_PORT_80_TCP=tcp://172.17.0.2:80
WEB1_PORT_80_TCP_PORT=80
LS_COLORS=
WEB1_PORT=tcp://172.17.0.2:80
WEB1_ENV_APACHE_RUN_GROUP=www-data
APACHE_LOG_DIR=/var/log/apache2
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
WEB1_PORT_80_TCP_PROTO=tcp
APACHE_RUN_GROUP=www-data
SHLVL=1
HOME=/root
WEB1_PORT_80_TCP_ADDR=172.17.0.2
WEB1_ENV_APACHE_RUN_USER=www-data
WEB1_NAME=/web2/web1
_=/usr/bin/printenv
root@00066ea46367:/# 

You’ll notice many new environmental variables. Docker copies any environmental variables from the linked container that were defined as part of its configuration. This includes:

  • Environmental variables described in the Docker image. More specifically, any ENV variables from the image’s Dockerfile.
  • Environmental variables passed to the container at runtime through the --env or -e flag.

In this case, these three variables were defined as ENV variables in the image’s Dockerfile:

APACHE_RUN_USER=www-data
APACHE_RUN_GROUP=www-data
APACHE_LOG_DIR=/var/log/apache2

Since both container images have the same ENV variables defined, we’ll see the local variables as well as the same environmental variables from the container web1, prefixed with WEB1_ENV_:

WEB1_ENV_APACHE_RUN_USER=www-data
WEB1_ENV_APACHE_RUN_GROUP=www-data
WEB1_ENV_APACHE_LOG_DIR=/var/log/apache2

In addition, Docker also created six other environmental variables that describe the web1 container as well as any of its exposed ports:

WEB1_PORT=tcp://172.17.0.2:80
WEB1_PORT_80_TCP=tcp://172.17.0.2:80
WEB1_PORT_80_TCP_ADDR=172.17.0.2
WEB1_PORT_80_TCP_PORT=80
WEB1_PORT_80_TCP_PROTO=tcp
WEB1_NAME=/web2/web1
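The naming scheme behind these variables is mechanical: the link name is upper-cased and combined with the exposed port, the protocol, and an attribute suffix. A Python sketch of how such port variables could be derived (illustrative, not Docker’s source code):

```python
def link_port_env(link_name, ip, ports):
    """Generate the <NAME>_PORT_* variables legacy linking injects,
    given the target's IP and its exposed (port, proto) pairs."""
    prefix = link_name.upper().replace("-", "_")
    env = {}
    for i, (port, proto) in enumerate(ports):
        url = f"{proto}://{ip}:{port}"
        if i == 0:
            env[f"{prefix}_PORT"] = url  # convenience variable: first port
        base = f"{prefix}_PORT_{port}_{proto.upper()}"
        env[base] = url
        env[f"{base}_ADDR"] = ip
        env[f"{base}_PORT"] = str(port)
        env[f"{base}_PROTO"] = proto
    return env

# Reproduces the WEB1_PORT_* variables shown above for web1's exposed port 80.
for key, value in sorted(link_port_env("web1", "172.17.0.2", [(80, "tcp")]).items()):
    print(f"{key}={value}")
```

The same scheme explains the WEBSERVER_-prefixed variables we’ll see when an alias is used: the alias simply replaces the link name in the prefix.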

Linking also allows you to specify aliases. For instance, let’s stop, remove, and respawn the container web2 using a slightly different syntax for linking:

user@docker1:~$ docker stop web2
web2
user@docker1:~$ docker rm web2
web2
user@docker1:~$ docker run -d -P --name=web2 --link=web1:webserver \
jonlangemak/web_server_2
e102fe52f8a08a02b01329605dcada3005208d9d63acea257b8d99b3ef78e71b
user@docker1:~$

Notice that after the link definition we inserted a :webserver. The name after the colon represents the alias for the link. In this case, I’ve specified an alias for the container web1 as webserver.

If we examine the web2 container, we’ll see that the alias is now also listed in the hosts file:

root@c258c7a0884d:/# more /etc/hosts
…<Additional output removed for brevity>… 
172.17.0.2      webserver 88f9c8629668 web1
172.17.0.3      c258c7a0884d
root@c258c7a0884d:/# 

Aliases also impact the environmental variables created during linking. Rather than using the container name they’ll instead use the alias:

user@docker1:~$ docker exec web2 printenv
…<Additional output removed for brevity>… 
WEBSERVER_PORT_80_TCP_ADDR=172.17.0.2
WEBSERVER_PORT_80_TCP_PORT=80
WEBSERVER_PORT_80_TCP_PROTO=tcp
…<Additional output removed for brevity>… 
user@docker1:~$

At this point you might be wondering how dynamic this is. After all, Docker is providing this functionality by updating static files in each container. What happens if a container’s IP address changes? For instance, let’s stop the container web1 and start a new container called web3 using the same image:

user@docker1:~$ docker stop web1
web1
user@docker1:~$ docker run -d -P --name=web3 jonlangemak/web_server_1
69fa80be8b113a079e19ca05c8be9e18eec97b7bbb871b700da4482770482715
user@docker1:~$

If you’ll recall from earlier, the container web1 had an IP address of 172.17.0.2 allocated to it. Since I stopped the container, Docker will release that IP address reservation making it available to be reassigned to the next container we start. Let’s check the IP address assigned to the container web3:

user@docker1:~$ docker exec web3 ip addr show dev eth0
79: eth0@if80: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:2/64 scope link
       valid_lft forever preferred_lft forever
user@docker1:~$

As expected, web3 took the now open IP address of 172.17.0.2 that previously belonged to the web1 container. We can also verify that the container web2 still believes that this IP address belongs to the web1 container:

user@docker1:~$ docker exec -t web2 more /etc/hosts | grep 172.17.0.2
172.17.0.2      webserver 88f9c8629668 web1
user@docker1:~$

If we start the container web1 once again, we should see it will get a new IP address allocated to it:

user@docker1:~$ docker start web1
web1
user@docker1:~$ docker exec web1 ip addr show dev eth0
81: eth0@if82: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:11:00:04 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.4/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:4/64 scope link
       valid_lft forever preferred_lft forever
user@docker1:~$

If we check the container web2 again, we should see that Docker has updated it to reference web1’s new IP address:

user@docker1:~$ docker exec web2 more /etc/hosts | grep web1
172.17.0.4      webserver 88f9c8629668 web1
user@docker1:~$

However, while Docker takes care of updating the host file with the new IP address, it will not take care of updating any of the environmental variables to reflect the new IP address:

user@docker1:~$ docker exec web2 printenv
…<Additional output removed for brevity>… 
WEBSERVER_PORT=tcp://172.17.0.2:80
WEBSERVER_PORT_80_TCP=tcp://172.17.0.2:80
WEBSERVER_PORT_80_TCP_ADDR=172.17.0.2
…<Additional output removed for brevity>… 
user@docker1:~$

Additionally, it should be pointed out that the link is only one way. That is, this link does not cause the container web1 to become aware of the web2 container. The container web1 will not receive the host records or environmental variables referencing web2:

user@docker1:~$ docker exec -it web1 ping web2
ping: unknown host
user@docker1:~$

Another reason to provision links is when you run Docker with inter-container communication (ICC) disabled. As we’ve discussed previously, disabling ICC prevents containers on the same bridge from talking directly to each other, forcing them to communicate only through published ports. Linking provides a mechanism to override the default ICC rules. To demonstrate, let’s stop and remove all the containers on our host docker1 and then add the following Docker option to the systemd drop-in file:

ExecStart=/usr/bin/dockerd --icc=false

Now reload the systemd configuration, restart the service, and start the following containers:

docker run -d -P --name=web1 jonlangemak/web_server_1
docker run -d -P --name=web2 jonlangemak/web_server_2

With ICC disabled, you’ll notice that the containers can’t talk directly to each other:

user@docker1:~$ docker exec web1 ip addr show dev eth0
87: eth0@if88: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:2/64 scope link
       valid_lft forever preferred_lft forever
user@docker1:~$ docker exec -it web2 curl http://172.17.0.2
user@docker1:~$

In the preceding example, web2 is not able to access the web server on web1. Now let’s delete and recreate the web2 container, this time linking it to web1:

user@docker1:~$ docker stop web2
web2
user@docker1:~$ docker rm web2
web2
user@docker1:~$ docker run -d -P --name=web2 --link=web1 \
jonlangemak/web_server_2
4c77916bb08dfc586105cee7ae328c30828e25fcec1df55f8adba8545cbb2d30
user@docker1:~$ docker exec -it web2 curl http://172.17.0.2
<body>
<html>
<h1><span style="color:#FF0000;font-size:72px;">Web Server #1 - Running on port 80</span></h1>
</body>
</html>
user@docker1:~$

We can see that with the link in place, the communication is allowed as expected. Once again, just like the name resolution provided by the link, this access is allowed in only one direction.

It should be noted that linking works differently when using user defined networks. In this recipe we covered what are now being called legacy links. Linking with user defined networks will be covered in the following recipes.

Leveraging Docker DNS

The introduction of user defined networks signaled a big change in Docker networking. While the ability to provision custom networks was the big news, there were also major enhancements in name resolution. User defined networks can benefit from what’s being called embedded DNS. The Docker engine itself now has the ability to provide name resolution to all of the containers. This is a marked improvement from the legacy solution where the only means for name resolution was external DNS or linking which relied on the hosts file. In this recipe, we’ll walk through how to use and configure embedded DNS.

Getting Ready

In this recipe we’ll be demonstrating the configuration on a single Docker host. It is assumed that this host has Docker installed and that Docker is in its default configuration. We’ll be altering name resolution settings on the host so you’ll need root level access.

How to do it…

As mentioned, the embedded DNS system only works on user defined Docker networks. With that in mind, let’s provision a user defined network and then start a simple container on it:

user@docker1:~$ docker network create -d bridge mybridge1
0d75f46594eb2df57304cf3a2b55890fbf4b47058c8e43a0a99f64e4ede98f5f
user@docker1:~$ docker run -d -P --name=web1 --net=mybridge1 \
jonlangemak/web_server_1
3a65d84a16331a5a84dbed4ec29d9b6042dde5649c37bc160bfe0b5662ad7d65
user@docker1:~$

As we saw in an earlier recipe, by default, Docker pulls the name resolution configuration from the Docker host and provides it to the container. This behavior can be changed by providing different DNS servers or search domains either at the service level or at container run time. In the case of containers connected to a user-defined network, the DNS settings provided to the container are slightly different. For instance, let’s look at the resolv.conf file for the container we just connected to the user defined bridge mybridge1:

user@docker1:~$ docker exec -t web1 more /etc/resolv.conf
search lab.lab
nameserver 127.0.0.11
options ndots:0
user@docker1:~$ 

Notice how the name server for this container is now 127.0.0.11. This IP address represents Docker’s embedded DNS server and will be used for any container connected to a user-defined network. In fact, any container connected to a user-defined network is required to use the embedded DNS server.

Containers not initially started on a user defined network will get updated the moment they connect to one. For instance, let’s start another container called web2 but have it use the default docker0 bridge:

user@docker1:~$ docker run -dP --name=web2 jonlangemak/web_server_2
d0c414477881f03efac26392ffbdfb6f32914597a0a7ba578474606d5825df3f
user@docker1:~$ docker exec -t web2 more /etc/resolv.conf
::::::::::::::
/etc/resolv.conf
::::::::::::::
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 10.20.30.13
search lab.lab
user@docker1:~$

If we now connect the web2 container to our user-defined network, Docker will update the name server to reflect the embedded DNS server:

user@docker1:~$ docker network connect mybridge1 web2
user@docker1:~$ docker exec -t web2 more /etc/resolv.conf
search lab.lab
nameserver 127.0.0.11
options ndots:0
user@docker1:~$ 
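Conceptually, the rewrite Docker performs here is a simple transform: the search domains survive, while the nameserver entries are replaced by the embedded resolver address. A hypothetical Python model of that step (illustrative only):

```python
EMBEDDED_DNS = "127.0.0.11"

def to_embedded_resolvconf(resolvconf: str) -> str:
    """Rewrite a container's resolv.conf for a user-defined network:
    keep search domains, point all queries at the embedded DNS server."""
    kept = [line for line in resolvconf.splitlines()
            if line.split()[:1] not in (["nameserver"], ["options"])
            and not line.startswith("#")]
    kept.append(f"nameserver {EMBEDDED_DNS}")
    kept.append("options ndots:0")
    return "\n".join(kept) + "\n"

before = "nameserver 10.20.30.13\nsearch lab.lab\n"
print(to_embedded_resolvconf(before))
# search lab.lab
# nameserver 127.0.0.11
# options ndots:0
```

The original host-learned nameserver isn’t lost, though: as we’ll see shortly, the embedded server uses it as a forwarder for external lookups.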

Since both our containers are now connected to the same user-defined network they can now reach each other by name:

user@docker1:~$ docker exec -t web1 ping web2 -c 2
PING web2 (172.18.0.3): 48 data bytes
56 bytes from 172.18.0.3: icmp_seq=0 ttl=64 time=0.107 ms
56 bytes from 172.18.0.3: icmp_seq=1 ttl=64 time=0.087 ms
--- web2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.087/0.097/0.107/0.000 ms

user@docker1:~$ docker exec -t web2 ping web1 -c 2
PING web1 (172.18.0.2): 48 data bytes
56 bytes from 172.18.0.2: icmp_seq=0 ttl=64 time=0.060 ms
56 bytes from 172.18.0.2: icmp_seq=1 ttl=64 time=0.119 ms
--- web1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.060/0.089/0.119/0.030 ms
user@docker1:~$

You’ll note that the name resolution is bidirectional and works inherently without the use of any links. That being said, with user defined networks, we can still define links for the purpose of creating local aliases. For instance, let’s stop and remove both containers web1 and web2 and reprovision them as follows:

user@docker1:~$ docker run -d -P --name=web1 --net=mybridge1 \
--link=web2:thesecondserver jonlangemak/web_server_1
fd21c53def0c2255fc20991fef25766db9e072c2bd503c7adf21a1bd9e0c8a0a
user@docker1:~$ docker run -d -P --name=web2 --net=mybridge1 \
--link=web1:thefirstserver jonlangemak/web_server_2
6e8f6ab4dec7110774029abbd69df40c84f67bcb6a38a633e0a9faffb5bf625e
6e8f6ab4dec7110774029abbd69df40c84f67bcb6a38a633e0a9faffb5bf625e
user@docker1:~$

The first interesting item to point out is that Docker let us link to a container that did not yet exist. When we ran the container web1, we asked Docker to link it to the container web2. At that point, web2 didn’t exist. This is a notable difference in how links work with the embedded DNS server. In legacy linking, Docker needed to know the target container’s information prior to making the link, because it had to manually update the source container’s hosts file and environmental variables. The second interesting item is that aliases are no longer listed in the container’s hosts file. If we look at each container, we’ll see that linking no longer generates hosts entries; the name resolution configuration simply points at the embedded DNS server:

user@docker1:~$ docker exec -t web1 more /etc/resolv.conf
search lab.lab
nameserver 127.0.0.11
options ndots:0
user@docker1:~$ docker exec -t web2 more /etc/resolv.conf
search lab.lab
nameserver 127.0.0.11
options ndots:0
user@docker1:~$ 

All of the resolution is now occurring in the embedded DNS server. This includes keeping track of defined aliases and their scope. So even without host records, each container is able to resolve the other container’s alias through the embedded DNS server:

user@docker1:~$ docker exec -t web1 ping thesecondserver -c 2
PING thesecondserver (172.18.0.3): 48 data bytes
56 bytes from 172.18.0.3: icmp_seq=0 ttl=64 time=0.067 ms
56 bytes from 172.18.0.3: icmp_seq=1 ttl=64 time=0.067 ms
--- thesecondserver ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.067/0.067/0.067/0.000 ms

user@docker1:~$ docker exec -t web2 ping thefirstserver -c 2
PING thefirstserver (172.18.0.2): 48 data bytes
56 bytes from 172.18.0.2: icmp_seq=0 ttl=64 time=0.062 ms
56 bytes from 172.18.0.2: icmp_seq=1 ttl=64 time=0.042 ms
--- thefirstserver ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.042/0.052/0.062/0.000 ms
user@docker1:~$

The aliases created have a scope that is local to the container itself. For instance, a third container on the same user defined network is not able to resolve the aliases created as part of the links:

user@docker1:~$ docker run -d -P --name=web3 --net=mybridge1 \
jonlangemak/web_server_1
d039722a155b5d0a702818ce4292270f30061b928e05740d80bb0c9cb50dd64f
user@docker1:~$ docker exec -it web3 ping thefirstserver -c 2
ping: unknown host
user@docker1:~$ docker exec -it web3 ping thesecondserver -c 2
ping: unknown host
user@docker1:~$

You’ll recall that legacy linking also automatically created a set of environmental variables on the source container. These environmental variables referenced the target container and any ports it might be exposing. Linking in user defined networks does not create these environmental variables:

user@docker1:~$ docker exec web1 printenv
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=4eba77b66d60
APACHE_RUN_USER=www-data
APACHE_RUN_GROUP=www-data
APACHE_LOG_DIR=/var/log/apache2
HOME=/root
user@docker1:~$ 

As we saw in the previous recipe, keeping these variables up to date wasn’t achievable even with legacy links. Given that, it’s not a total surprise that the functionality doesn’t exist with user defined networks.

In addition to providing local container resolution, the embedded DNS server also handles any external requests. As we saw in the preceding example, the search domain from the Docker host (lab.lab in my case) was still being passed down to the containers and configured in their resolv.conf file. The name server learned from the host becomes a forwarder for the embedded DNS server. This allows the embedded DNS server to process any container name resolution requests and hand off external requests to the name server used by the Docker host. This behavior can be overridden either at the service level or by passing the --dns or --dns-search flag to a container at run time. For instance, we can start two more instances of the web1 container and specify a specific DNS server in each case:

user@docker1:~$ docker run -dP --net=mybridge1 --name=web4 \
--dns=10.20.30.13 jonlangemak/web_server_1
19e157b46373d24ca5bbd3684107a41f22dea53c91e91e2b0d8404e4f2ccfd68
user@docker1:~$ docker run -dP --net=mybridge1 --name=web5 \
--dns=8.8.8.8 jonlangemak/web_server_1
700f8ac4e7a20204100c8f0f48710e0aab8ac0f05b86f057b04b1bbfe8141c26
user@docker1:~$

Note that web4 would receive 10.20.30.13 as a DNS forwarder even if we didn’t specify it explicitly. This is because that’s also the DNS server used by the Docker host and when not specified the container inherits from the host. It is specified here for the sake of the example.

Now if we try to resolve a local DNS record on each container, we can see that in the case of web4 the lookup works, since it has the local DNS server defined, whereas the lookup on web5 fails because 8.8.8.8 doesn’t know about the lab.lab domain:

user@docker1:~$ docker exec -it web4 ping docker1.lab.lab -c 2
PING docker1.lab.lab (10.10.10.101): 48 data bytes
56 bytes from 10.10.10.101: icmp_seq=0 ttl=64 time=0.080 ms
56 bytes from 10.10.10.101: icmp_seq=1 ttl=64 time=0.078 ms
--- docker1.lab.lab ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.078/0.079/0.080/0.000 ms

user@docker1:~$ docker exec -it web5 ping docker1.lab.lab -c 2
ping: unknown host
user@docker1:~$
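The behavior demonstrated above can be summed up in a toy model of the embedded DNS server: answer queries for container names and aliases from the user-defined network’s own records, and forward everything else to the DNS server configured for that container (the dictionaries here stand in for real DNS zones and are purely illustrative):

```python
def embedded_dns_lookup(name, network_records, forwarder_zone):
    """Toy model of the embedded DNS server: answer from the user-defined
    network's records first, otherwise forward to the container's DNS server."""
    short = name.split(".")[0]
    if short in network_records:       # container names/aliases on the network
        return network_records[short]
    return forwarder_zone.get(name)    # stand-in for a real upstream query

mybridge1 = {"web4": "172.18.0.4", "web5": "172.18.0.5"}
lab_dns = {"docker1.lab.lab": "10.10.10.101"}  # what 10.20.30.13 knows
public_dns = {}                                # 8.8.8.8 knows nothing of lab.lab

print(embedded_dns_lookup("web5", mybridge1, lab_dns))                # 172.18.0.5
print(embedded_dns_lookup("docker1.lab.lab", mybridge1, lab_dns))     # 10.10.10.101
print(embedded_dns_lookup("docker1.lab.lab", mybridge1, public_dns))  # None
```

This mirrors what we saw: container names always resolve, while lab.lab records resolve only when the container’s forwarder is the lab DNS server.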

Summary

In this article, we discussed the available options for container name resolution. This includes both the default name resolution behavior as well as the new embedded DNS server functionality that exists with user defined networks. Along the way, we walked through the process used to determine name server assignment under each of these scenarios.
