Installing and running LXC

In this article by Konstantin Ivanov, the author of the book Containerization with LXC, we will see how to install and run LXC. LXC takes advantage of kernel namespaces and cgroups to create the process isolation we commonly refer to as containers. As such, LXC is not a separate software component in the Linux kernel, but rather a set of userspace tools, the liblxc library, and various language bindings.

In this article, we are going to cover the following topics:

  • Installing LXC on Ubuntu
  • Building and starting containers using the provided templates and configuration files
  • Showcasing various LXC operations

Installing LXC

At the time of writing, there are two long-term support versions of LXC: 1.0 and 2.0. The userspace tools that they provide have some minor differences in command-line flags and deprecations that I'll point out as we use them.


Installing LXC on Ubuntu with apt

Let’s start by installing LXC 1.0 on Ubuntu 14.04 Trusty:

  1. Install the main LXC package, tooling, and dependencies:
    root@ubuntu:~# lsb_release -dc
    
    Description:     Ubuntu 14.04.5 LTS
    
    Codename:       trusty
    
    root@ubuntu:~# apt-get install -y lxc bridge-utils debootstrap libcap-dev cgroup-bin libpam-systemd
    
        root@ubuntu:~#
  2. The package version that Trusty provides at this time is 1.0.8:
    root@ubuntu:~# dpkg --list | grep lxc | awk '{print $2,$3}'
    
    liblxc1      1.0.8-0ubuntu0.3
    
    lxc          1.0.8-0ubuntu0.3
    
    lxc-templates 1.0.8-0ubuntu0.3
    
    python3-lxc  1.0.8-0ubuntu0.3
    
    root@ubuntu:~#

To install LXC 2.0 we’ll need the backports repository:

  1. Add the following two lines to the apt sources file:
    root@ubuntu:~# vim /etc/apt/sources.list
    
    deb http://archive.ubuntu.com/ubuntu trusty-backports main restricted universe multiverse
    
        deb-src http://archive.ubuntu.com/ubuntu trusty-backports main restricted universe multiverse
  2. Resynchronize the package index files from their sources:
    root@ubuntu:~# apt-get update
  3. Install the main LXC package, tooling, and dependencies:
    root@ubuntu:~# apt-get install -y lxc=2.0.3-0ubuntu1~ubuntu14.04.1 lxc1=2.0.3-0ubuntu1~ubuntu14.04.1 liblxc1=2.0.3-0ubuntu1~ubuntu14.04.1 python3-lxc=2.0.3-0ubuntu1~ubuntu14.04.1 cgroup-lite=1.11~ubuntu14.04.2 lxc-templates=2.0.3-0ubuntu1~ubuntu14.04.1 bridge-utils
    
        root@ubuntu:~#
  4. Ensure the package versions are on the 2.x branch, in this case 2.0.3:
    root@ubuntu:~# dpkg --list | grep lxc | awk '{print $2,$3}'
    
    liblxc1     2.0.3-0ubuntu1~ubuntu14.04.1
    
    lxc         2.0.3-0ubuntu1~ubuntu14.04.1
    
    lxc-common   2.0.3-0ubuntu1~ubuntu14.04.1
    
    lxc-templates 2.0.3-0ubuntu1~ubuntu14.04.1
    
    lxc1         2.0.3-0ubuntu1~ubuntu14.04.1
    
    lxcfs         2.0.2-0ubuntu1~ubuntu14.04.1
    
    python3-lxc   2.0.3-0ubuntu1~ubuntu14.04.1
    root@ubuntu:~#
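
With the packages in place, it is a good idea to confirm that the running kernel provides all of the namespace and cgroup features LXC depends on. The lxc-checkconfig utility that ships with the packages does exactly that; each listed feature should report enabled (the exact list varies with the LXC and kernel versions, so treat this as a quick sanity check):

root@ubuntu:~# lxc-checkconfig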

LXC directory installation layout

The following table shows the directory layout of LXC that is created after package and source installation. The directories vary depending on distribution and installation method.

Ubuntu package            | CentOS package            | Source installation             | Description
--------------------------|---------------------------|---------------------------------|----------------------------------------------------------------
/usr/share/lxc            | /usr/share/lxc            | /usr/local/share/lxc            | LXC base directory
/usr/share/lxc/config     | /usr/share/lxc/config     | /usr/local/share/lxc/config     | Collection of distribution-based LXC configuration files
/usr/share/lxc/templates  | /usr/share/lxc/templates  | /usr/local/share/lxc/templates  | Collection of container template scripts
/usr/bin                  | /usr/bin                  | /usr/local/bin                  | Location of most LXC binaries
/usr/lib/x86_64-linux-gnu | /usr/lib64                | /usr/local/lib                  | Location of the liblxc libraries
/etc/lxc                  | /etc/lxc                  | /usr/local/etc/lxc              | Location of the default LXC config files
/var/lib/lxc/             | /var/lib/lxc/             | /usr/local/var/lib/lxc/         | Location of the root filesystem and config of created containers
/var/log/lxc              | /var/log/lxc              | /usr/local/var/log/lxc          | LXC log files

We will explore most of these directories while building, starting, and terminating LXC containers.

Building and manipulating LXC containers

Managing the container life cycle with the provided userspace tools is quite convenient compared to manually creating namespaces and applying resource limits with cgroups. In essence, this is exactly what the LXC tools do: they create and manipulate the namespaces and cgroups through calls to the liblxc API.

LXC comes packaged with various templates for building root file systems for different Linux distributions. We can use them to create a variety of container flavors.

Building our first container

We can create our first container by using a template. The lxc-download template, like the rest of the templates in the templates directory, is a script written in bash:

root@ubuntu:~# ls -la /usr/share/lxc/templates/

drwxr-xr-x 2 root root 4096 Aug 29 20:03 .

drwxr-xr-x 6 root root 4096 Aug 29 19:58 ..

-rwxr-xr-x 1 root root 10557 Nov 18 2015 lxc-alpine

-rwxr-xr-x 1 root root 13534 Nov 18 2015 lxc-altlinux

-rwxr-xr-x 1 root root 10556 Nov 18 2015 lxc-archlinux

-rwxr-xr-x 1 root root 9878 Nov 18 2015 lxc-busybox

-rwxr-xr-x 1 root root 29149 Nov 18 2015 lxc-centos

-rwxr-xr-x 1 root root 10486 Nov 18 2015 lxc-cirros

-rwxr-xr-x 1 root root 17354 Nov 18 2015 lxc-debian

-rwxr-xr-x 1 root root 17757 Nov 18 2015 lxc-download

-rwxr-xr-x 1 root root 49319 Nov 18 2015 lxc-fedora

-rwxr-xr-x 1 root root 28253 Nov 18 2015 lxc-gentoo

-rwxr-xr-x 1 root root 13962 Nov 18 2015 lxc-openmandriva

-rwxr-xr-x 1 root root 14046 Nov 18 2015 lxc-opensuse

-rwxr-xr-x 1 root root 35540 Nov 18 2015 lxc-oracle

-rwxr-xr-x 1 root root 11868 Nov 18 2015 lxc-plamo

-rwxr-xr-x 1 root root 6851 Nov 18 2015 lxc-sshd

-rwxr-xr-x 1 root root 23494 Nov 18 2015 lxc-ubuntu

-rwxr-xr-x 1 root root 11349 Nov 18 2015 lxc-ubuntu-cloud

root@ubuntu:~#

If you examine the scripts closely you’ll notice that most of them create chroot environments, where packages and various configuration files are then installed to create the root filesystem for the selected distribution.
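
An easy way to see this for yourself (the path reflects the Ubuntu package layout) is to search the templates for the bootstrapping tools they call, for example debootstrap in the Debian-based ones:

root@ubuntu:~# grep -l debootstrap /usr/share/lxc/templates/lxc-*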

Let’s start by building a container using the lxc-download template, which will ask for the distribution, release and architecture, then use the appropriate template to create the file system and configuration for us:

root@ubuntu:~# lxc-create -t download -n c1

Setting up the GPG keyring

Downloading the image index

---

DIST     RELEASE ARCH    VARIANT BUILD

---

centos   6       amd64  default 20160831_02:16

centos   6       i386     default 20160831_02:16

centos   7       amd64   default 20160831_02:16

debian   jessie   amd64   default 20160830_22:42

debian   jessie   arm64   default 20160824_22:42

debian   jessie   armel   default 20160830_22:42

...

ubuntu   trusty   amd64  default 20160831_03:49

ubuntu   trusty   arm64  default 20160831_07:50

ubuntu   yakkety s390x  default 20160831_03:49

---

Distribution: ubuntu

Release: trusty

Architecture: amd64

Unpacking the rootfs

---

You just created an Ubuntu container (release=trusty, arch=amd64, variant=default)

To enable sshd, run: apt-get install openssh-server

For security reason, container images ship without user accounts

and without a root password.

Use lxc-attach or chroot directly into the rootfs to set a root password

or create user accounts.

root@ubuntu:~#
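
The download template can also run non-interactively; the distribution, release, and architecture can be passed as template options after the -- separator, for example to build a second container named c2 without any prompts:

root@ubuntu:~# lxc-create -t download -n c2 -- --dist ubuntu --release trusty --arch amd64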

Let’s list all containers:

root@ubuntu:~# lxc-ls -f

NAME                 STATE   IPV4 IPV6 AUTOSTART

----------------------------------------------------

c1                   STOPPED -     -     NO

root@ubuntu:~#

Depending on the version of LXC, some of the command options might be different; read the man page for each of the tools if you encounter errors.

Our container is currently not running; let's start it in the background and increase the log level to DEBUG:

root@ubuntu:~# lxc-start -n c1 -d -l DEBUG

On some distributions LXC does not create the host bridge when building the first container, which results in an error. If this happens, you can create it by running: brctl addbr virbr0

root@ubuntu:~# lxc-ls -f

NAME                 STATE   IPV4       IPV6 AUTOSTART

----------------------------------------------------------

c1                   RUNNING 10.0.3.190 -     NO

root@ubuntu:~#

To obtain more information about the container, run:

root@ubuntu:~# lxc-info -n c1

Name:           c1

State:         RUNNING

PID:           29364

IP:             10.0.3.190

CPU use:       1.46 seconds

BlkIO use:     112.00 KiB

Memory use:     6.34 MiB

KMem use:       0 bytes

Link:           vethVRD8T2

TX bytes:     4.28 KiB

RX bytes:     4.43 KiB

Total bytes:   8.70 KiB

root@ubuntu:~#

The new container is now connected to the host bridge lxcbr0:

root@ubuntu:~# brctl show

bridge name     bridge id         STP enabled     interfaces

lxcbr0     8000.fea50feb48ac       no        vethVRD8T2

root@ubuntu:~# ip a s lxcbr0

4: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default

   link/ether fe:a5:0f:eb:48:ac brd ff:ff:ff:ff:ff:ff

   inet 10.0.3.1/24 brd 10.0.3.255 scope global lxcbr0

       valid_lft forever preferred_lft forever

   inet6 fe80::465:64ff:fe49:5fb5/64 scope link

       valid_lft forever preferred_lft forever

root@ubuntu:~# ip a s vethVRD8T2

8: vethVRD8T2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master lxcbr0 state UP group default qlen 1000

   link/ether fe:a5:0f:eb:48:ac brd ff:ff:ff:ff:ff:ff

   inet6 fe80::fca5:fff:feeb:48ac/64 scope link

       valid_lft forever preferred_lft forever

root@ubuntu:~#

By using the download template and not specifying any network settings, the container obtains its IP address from a dnsmasq server that runs on a private network, 10.0.3.0/24 in this case. The host allows the container to connect to the rest of the network and Internet by using NAT rules in iptables:

root@ubuntu:~# iptables -L -n -t nat

Chain PREROUTING (policy ACCEPT)

target     prot opt source               destination

Chain INPUT (policy ACCEPT)

target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)

target     prot opt source               destination

Chain POSTROUTING (policy ACCEPT)

target     prot opt source               destination

MASQUERADE all -- 10.0.3.0/24         !10.0.3.0/24

root@ubuntu:~#
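
For the MASQUERADE rule to actually pass the containers' traffic, IP forwarding must be enabled on the host. The Ubuntu packages normally take care of this, but it is a quick sanity check to run (not LXC-specific):

root@ubuntu:~# sysctl net.ipv4.ip_forward

net.ipv4.ip_forward = 1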

Other containers connected to the bridge will have access to each other and to the host, as long as they are all connected to the same bridge and are not tagged with different VLAN IDs.

Let's see what the process tree looks like after starting the container:

root@ubuntu:~# ps axfww

…

1552 ?      S     0:00 dnsmasq -u lxc-dnsmasq --strict-order --bind-interfaces --pid-file=/run/lxc/dnsmasq.pid --conf-file= --listen-address 10.0.3.1 --dhcp-range 10.0.3.2,10.0.3.254 --dhcp-lease-max=253 --dhcp-no-override --except-interface=lo --interface=lxcbr0 --dhcp-leasefile=/var/lib/misc/dnsmasq.lxcbr0.leases --dhcp-authoritative

29356 ?       Ss     0:00 lxc-start -n c1 -d -l DEBUG

29364 ?       Ss     0:00 _ /sbin/init

29588 ?       S     0:00     _ upstart-udev-bridge --daemon

29597 ?       Ss    0:00     _ /lib/systemd/systemd-udevd --daemon

29667 ?       Ssl   0:00     _ rsyslogd

29688 ?       S     0:00     _ upstart-file-bridge --daemon

29690 ?       S     0:00     _ upstart-socket-bridge --daemon

29705 ?       Ss     0:00     _ dhclient -1 -v -pf /run/dhclient.eth0.pid -lf /var/lib/dhcp/dhclient.eth0.leases eth0

29775 pts/6   Ss+   0:00     _ /sbin/getty -8 38400 tty4

29777 pts/1   Ss+   0:00     _ /sbin/getty -8 38400 tty2

29778 pts/5   Ss+   0:00     _ /sbin/getty -8 38400 tty3

29787 ?       Ss     0:00     _ cron

29827 pts/7   Ss+   0:00     _ /sbin/getty -8 38400 console

29829 pts/0   Ss+   0:00     _ /sbin/getty -8 38400 tty1

root@ubuntu:~#

Notice the new init child process that was cloned from the lxc-start command. This is PID 1 in the actual container.
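
Because the container has its own PID namespace, we can confirm this without opening a full shell; lxc-attach accepts a single command after the -- separator. The following one-liner should report init as PID 1, even though the same process appears under a much higher PID on the host:

root@ubuntu:~# lxc-attach -n c1 -- ps -p 1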

Next, let's attach to the container, list all processes and network interfaces, and check connectivity:

root@ubuntu:~# lxc-attach -n c1

root@c1:~# ps axfw

PID TTY     STAT   TIME COMMAND

   1 ?        Ss     0:00 /sbin/init

176 ?       S     0:00 upstart-udev-bridge --daemon

185 ?       Ss     0:00 /lib/systemd/systemd-udevd --daemon

255 ?       Ssl   0:00 rsyslogd

276 ?       S     0:00 upstart-file-bridge --daemon

278 ?       S      0:00 upstart-socket-bridge --daemon

293 ?       Ss     0:00 dhclient -1 -v -pf /run/dhclient.eth0.pid -lf /var/lib/dhcp/dhclient.eth0.leases eth0

363 lxc/tty4 Ss+   0:00 /sbin/getty -8 38400 tty4

365 lxc/tty2 Ss+   0:00 /sbin/getty -8 38400 tty2

366 lxc/tty3 Ss+   0:00 /sbin/getty -8 38400 tty3

375 ?       Ss     0:00 cron

415 lxc/console Ss+   0:00 /sbin/getty -8 38400 console

417 lxc/tty1 Ss+   0:00 /sbin/getty -8 38400 tty1

458 ?       S     0:00 /bin/bash

468 ?       R+   0:00 ps ax

root@c1:~# ip a s

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default

   link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

   inet 127.0.0.1/8 scope host lo

       valid_lft forever preferred_lft forever

   inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

7: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000

   link/ether 00:16:3e:b2:34:8a brd ff:ff:ff:ff:ff:ff

   inet 10.0.3.190/24 brd 10.0.3.255 scope global eth0

       valid_lft forever preferred_lft forever

   inet6 fe80::216:3eff:feb2:348a/64 scope link

       valid_lft forever preferred_lft forever

root@c1:~# ping -c 3 google.com

PING google.com (216.58.192.238) 56(84) bytes of data.

64 bytes from ord30s26-in-f14.1e100.net (216.58.192.238): icmp_seq=1 ttl=52 time=1.77 ms

64 bytes from ord30s26-in-f14.1e100.net (216.58.192.238): icmp_seq=2 ttl=52 time=1.58 ms

64 bytes from ord30s26-in-f14.1e100.net (216.58.192.238): icmp_seq=3 ttl=52 time=1.75 ms

--- google.com ping statistics ---

3 packets transmitted, 3 received, 0% packet loss, time 2003ms

rtt min/avg/max/mdev = 1.584/1.705/1.779/0.092 ms

root@c1:~# exit

exit

root@ubuntu:~#

On some distributions like CentOS, or if installed from source, the dnsmasq server is not configured and started by default. You can either install it and configure it manually, or configure the container with an IP address and a default gateway instead, as I demonstrate later in this article.
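
For reference, static addressing is controlled through the container's configuration file; a minimal sketch using the LXC 1.x/2.x key names looks like the following (the full setup is demonstrated later):

lxc.network.ipv4 = 10.0.3.190/24

lxc.network.ipv4.gateway = 10.0.3.1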

Notice how the hostname on the terminal changed once we attached to the container. This is an example of how LXC uses the UTS namespace.
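
The same can be observed without an interactive shell; the container reports the hostname from its own UTS namespace:

root@ubuntu:~# lxc-attach -n c1 -- hostname

c1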

Let’s examine the directory that was created after building the c1 container:

root@ubuntu:~# ls -la /var/lib/lxc/c1/

total 16

drwxrwx--- 3 root root 4096 Aug 31 20:40 .

drwx------ 3 root root 4096 Aug 31 21:01 ..

-rw-r--r-- 1 root root 516 Aug 31 20:40 config

drwxr-xr-x 21 root root 4096 Aug 31 21:00 rootfs

root@ubuntu:~#

The rootfs directory looks like a regular Linux filesystem. You can manipulate the container directly by making changes to the files there, or by using chroot.

To demonstrate this, let’s change the root password of the c1 container not by attaching to it, but by using chroot rootfs:

root@ubuntu:~# cd /var/lib/lxc/c1/

root@ubuntu:/var/lib/lxc/c1# chroot rootfs

root@ubuntu:/# ls -al

total 84

drwxr-xr-x 21 root root 4096 Aug 31 21:00 .

drwxr-xr-x 21 root root 4096 Aug 31 21:00 ..

drwxr-xr-x 2 root root 4096 Aug 29 07:33 bin

drwxr-xr-x 2 root root 4096 Apr 10 2014 boot

drwxr-xr-x 4 root root 4096 Aug 31 21:00 dev

drwxr-xr-x 68 root root 4096 Aug 31 22:12 etc

drwxr-xr-x 3 root root 4096 Aug 29 07:33 home

drwxr-xr-x 12 root root 4096 Aug 29 07:33 lib

drwxr-xr-x 2 root root 4096 Aug 29 07:32 lib64

drwxr-xr-x 2 root root 4096 Aug 29 07:31 media

drwxr-xr-x 2 root root 4096 Apr 10 2014 mnt

drwxr-xr-x 2 root root 4096 Aug 29 07:31 opt

drwxr-xr-x 2 root root 4096 Apr 10 2014 proc

drwx------ 2 root root 4096 Aug 31 22:12 root

drwxr-xr-x 8 root root 4096 Aug 31 20:54 run

drwxr-xr-x 2 root root 4096 Aug 29 07:33 sbin

drwxr-xr-x 2 root root 4096 Aug 29 07:31 srv

drwxr-xr-x 2 root root 4096 Mar 13 2014 sys

drwxrwxrwt 2 root root 4096 Aug 31 22:12 tmp

drwxr-xr-x 10 root root 4096 Aug 29 07:31 usr

drwxr-xr-x 11 root root 4096 Aug 29 07:31 var

root@ubuntu:/# passwd

Enter new UNIX password:

Retype new UNIX password:

passwd: password updated successfully

root@ubuntu:/# exit

exit

root@ubuntu:/var/lib/lxc/c1#

Notice how the path changed on the console when we used chroot and after exiting the jailed environment.

To test the root password, let's install an SSH server in the container by first attaching to it, and then use ssh to connect:

root@ubuntu:~# lxc-attach -n c1

root@c1:~# apt-get update && apt-get install -y openssh-server

root@c1:~# sed -i 's/without-password/yes/g' /etc/ssh/sshd_config

root@c1:~# service ssh restart

root@c1:/# exit

exit

root@ubuntu:/var/lib/lxc/c1# ssh 10.0.3.190

root@10.0.3.190's password:

Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-91-generic x86_64)

* Documentation: https://help.ubuntu.com/

Last login: Wed Aug 31 22:25:39 2016 from 10.0.3.1

root@c1:~# exit

logout

Connection to 10.0.3.190 closed.

root@ubuntu:/var/lib/lxc/c1#

We were able to ssh to the container and use the root password that was manually set earlier.

Autostarting LXC containers

By default, LXC containers do not start after a server reboot. To change that, we can use the lxc-autostart tool and the container's configuration file.

To demonstrate this, let’s create a new container first:

root@ubuntu:~# lxc-create --name autostart_container --template ubuntu

root@ubuntu:~# lxc-ls -f

NAME               STATE   AUTOSTART GROUPS IPV4 IPV6

autostart_container STOPPED 0        -     -   -

root@ubuntu:~#

Next, add the lxc.start.auto stanza to its config file:

root@ubuntu:~# echo "lxc.start.auto = 1" >> /var/lib/lxc/autostart_container/config

root@ubuntu:~#

List all containers that are configured to start automatically:

root@ubuntu:~# lxc-autostart --list

autostart_container

root@ubuntu:~#

Now we can use the lxc-autostart command again to start all containers configured to autostart, in this case just one:

root@ubuntu:~# lxc-autostart --all

root@ubuntu:~# lxc-ls -f

NAME                STATE   AUTOSTART GROUPS IPV4     IPV6

autostart_container RUNNING 1         -     10.0.3.98 -

root@ubuntu:~#

Two other useful autostart configuration parameters are adding a delay to the start, and defining a group in which multiple containers can start as a single unit. Stop the container and add the following configuration options:

root@ubuntu:~# lxc-stop --name autostart_container

root@ubuntu:~# echo "lxc.start.delay = 5" >> /var/lib/lxc/autostart_container/config

root@ubuntu:~# echo "lxc.group = high_priority" >> /var/lib/lxc/autostart_container/config

root@ubuntu:~#

Next, let's list the containers configured to autostart again:

root@ubuntu:~# lxc-autostart --list

root@ubuntu:~#

Notice that no containers were listed in the preceding output. This is because our container now belongs to an autostart group. Let's specify the group:

root@ubuntu:~# lxc-autostart --list --group high_priority

autostart_container 5

root@ubuntu:~#

Similarly, to start all containers belonging to a given autostart group:

root@ubuntu:~# lxc-autostart --group high_priority

root@ubuntu:~# lxc-ls -f

NAME               STATE   AUTOSTART GROUPS       IPV4     IPV6

autostart_container RUNNING 1         high_priority 10.0.3.98 -

root@ubuntu:~#

In order for lxc-autostart to automatically start containers after a server reboot, it first needs to be invoked itself. This can be achieved by either adding the command to crontab, or by creating an init script.
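
For example, a minimal crontab entry for the root user would do (a sketch; adjust the binary path to match your distribution):

root@ubuntu:~# crontab -e

@reboot /usr/bin/lxc-autostart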

Finally, in order to clean up, run:

root@ubuntu:~# lxc-destroy --name autostart_container

Destroyed container autostart_container

root@ubuntu:~# lxc-ls -f

root@ubuntu:~#

LXC container hooks

LXC provides a convenient way to execute programs during the life cycle of containers. The following table summarizes the various configuration options available to allow for this feature:

Option              | Description
--------------------|--------------------------------------------------------------------------------
lxc.hook.pre-start  | A hook to be run in the host namespace before the container ttys, consoles, or mounts are loaded.
lxc.hook.pre-mount  | A hook to be run in the container's filesystem namespace, but before the rootfs has been set up.
lxc.hook.mount      | A hook to be run in the container after mounting has been done, but before the pivot_root.
lxc.hook.autodev    | A hook to be run in the container after mounting has been done and after any mount hooks have run, but before the pivot_root.
lxc.hook.start      | A hook to be run in the container right before executing the container's init.
lxc.hook.stop       | A hook to be run in the host's namespace after the container has been shut down.
lxc.hook.post-stop  | A hook to be run in the host's namespace after the container has been shut down.
lxc.hook.clone      | A hook to be run when the container is cloned.
lxc.hook.destroy    | A hook to be run when the container is destroyed.

To demonstrate this, let's create a new container and write a simple script that will output the values of four LXC variables to a file during container start.

First, create the container and add the lxc.hook.pre-start option to its configuration file:

root@ubuntu:~# lxc-create --name hooks_container --template ubuntu

root@ubuntu:~# echo "lxc.hook.pre-start = /var/lib/lxc/hooks_container/pre_start.sh" >> /var/lib/lxc/hooks_container/config

root@ubuntu:~#

Next, create a simple bash script and make it executable:

root@ubuntu:~# cat /var/lib/lxc/hooks_container/pre_start.sh

#!/bin/bash

LOG_FILE=/tmp/container.log

echo "Container name: $LXC_NAME" | tee -a $LOG_FILE

echo "Container mounted rootfs: $LXC_ROOTFS_MOUNT" | tee -a $LOG_FILE

echo "Container config file $LXC_CONFIG_FILE" | tee -a $LOG_FILE

echo "Container rootfs: $LXC_ROOTFS_PATH" | tee -a $LOG_FILE

root@ubuntu:~#

root@ubuntu:~# chmod u+x /var/lib/lxc/hooks_container/pre_start.sh

root@ubuntu:~#

Start the container and check the contents of the file that the bash script should have written to, ensuring the script got triggered:

root@ubuntu:~# lxc-start --name hooks_container

root@ubuntu:~# lxc-ls -f

NAME           STATE   AUTOSTART GROUPS IPV4       IPV6

hooks_container RUNNING 0         -     10.0.3.237 -

root@ubuntu:~# cat /tmp/container.log

Container name: hooks_container

Container mounted rootfs: /usr/lib/x86_64-linux-gnu/lxc

Container config file /var/lib/lxc/hooks_container/config

Container rootfs: /var/lib/lxc/hooks_container/rootfs

root@ubuntu:~#

From the preceding output we can see that the script got triggered when we started the container, and the values of the LXC variables got written to the temp file.

Attaching directories from the host OS and exploring the running filesystem of a container

The root filesystem of an LXC container is visible from the host OS as a regular directory tree. We can directly manipulate files in a running container just by making changes in that directory. LXC also allows attaching directories from the host OS inside the container using a bind mount. A bind mount is a different view of the directory tree, replicating the existing tree under a different mount point.
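
Outside of LXC, the same mechanism is available through the mount command; here is a quick standalone sketch with hypothetical paths:

root@ubuntu:~# mkdir /tmp/source /tmp/target

root@ubuntu:~# mount --bind /tmp/source /tmp/target

Any file created under /tmp/source is now also visible under /tmp/target, and running umount /tmp/target detaches the second view without touching the original directory.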

To demonstrate this, let's create a new container, a directory, and a file on the host:

root@ubuntu:~# mkdir /tmp/export_to_container

root@ubuntu:~# hostname -f > /tmp/export_to_container/file

root@ubuntu:~# lxc-create --name mount_container --template ubuntu

root@ubuntu:~#

Next, we are going to use the lxc.mount.entry option in the configuration file of the container, telling LXC what directory to bind mount from the host and the mount point inside the container to bind to:

root@ubuntu:~# echo "lxc.mount.entry = /tmp/export_to_container/ /var/lib/lxc/mount_container/rootfs/mnt none ro,bind 0 0" >> /var/lib/lxc/mount_container/config

root@ubuntu:~#

Once the container is started, we can see that /mnt inside it now contains the file that we created earlier in the /tmp/export_to_container directory on the host OS:

root@ubuntu:~# lxc-start --name mount_container

root@ubuntu:~# lxc-attach --name mount_container

root@mount_container:~# cat /mnt/file

ubuntu

root@mount_container:~# exit

exit

root@ubuntu:~#

When an LXC container is in a running state, some files are only visible from /proc on the host OS. To examine the running directory of a container, first grab its PID:

root@ubuntu:~# lxc-info --name mount_container

Name:           mount_container

State:         RUNNING

PID:           8594

IP:             10.0.3.237

CPU use:       1.96 seconds

BlkIO use:     212.00 KiB

Memory use:     8.50 MiB

KMem use:       0 bytes

Link:           vethBXR2HO

TX bytes:     4.74 KiB

RX bytes:     4.73 KiB

Total bytes:   9.46 KiB

root@ubuntu:~#

With the PID in hand, we can examine the running directory of the container:

root@ubuntu:~# ls -la /proc/8594/root/run/

total 44

drwxr-xr-x 10 root root 420 Sep 14 23:28 .

drwxr-xr-x 21 root root 4096 Sep 14 23:28 ..

-rw-r--r-- 1 root root   4 Sep 14 23:28 container_type

-rw-r--r-- 1 root root   5 Sep 14 23:28 crond.pid

---------- 1 root root   0 Sep 14 23:28 crond.reboot

-rw-r--r-- 1 root root   5 Sep 14 23:28 dhclient.eth0.pid

drwxrwxrwt 2 root root   40 Sep 14 23:28 lock

-rw-r--r-- 1 root root 112 Sep 14 23:28 motd.dynamic

drwxr-xr-x 3 root root 180 Sep 14 23:28 network

drwxr-xr-x 3 root root 100 Sep 14 23:28 resolvconf

-rw-r--r-- 1 root root   5 Sep 14 23:28 rsyslogd.pid

drwxr-xr-x 2 root root   40 Sep 14 23:28 sendsigs.omit.d

drwxrwxrwt 2 root root   40 Sep 14 23:28 shm

drwxr-xr-x 2 root root   40 Sep 14 23:28 sshd

-rw-r--r-- 1 root root   5 Sep 14 23:28 sshd.pid

drwxr-xr-x 2 root root   80 Sep 14 23:28 udev

-rw-r--r-- 1 root root   5 Sep 14 23:28 upstart-file-bridge.pid

-rw-r--r-- 1 root root   4 Sep 14 23:28 upstart-socket-bridge.pid

-rw-r--r-- 1 root root   5 Sep 14 23:28 upstart-udev-bridge.pid

drwxr-xr-x 2 root root   40 Sep 14 23:28 user

-rw-rw-r-- 1 root utmp 2688 Sep 14 23:28 utmp

root@ubuntu:~#

Make sure you replace the PID with the output of lxc-info from your host, as it will differ from the above example.

In order to make persistent changes in the root filesystem of a container, modify the files in /var/lib/lxc/mount_container/rootfs/ instead.

Freezing a running container

LXC takes advantage of the freezer cgroup to freeze all the processes running inside a container. The processes will be in a blocked state until thawed. Freezing a container can be useful in cases where the system load is high and you want to free some resources without actually stopping the container, preserving its running state.

Ensure you have a running container and check its state from the freezer cgroup:

root@ubuntu:~# lxc-ls -f

NAME           STATE   AUTOSTART GROUPS IPV4       IPV6

hooks_container RUNNING 0         -     10.0.3.237 -

root@ubuntu:~# cat /sys/fs/cgroup/freezer/lxc/hooks_container/freezer.state

THAWED

root@ubuntu:~#

Notice how a currently running container shows as thawed. Let’s freeze it:

root@ubuntu:~# lxc-freeze -n hooks_container

root@ubuntu:~# lxc-ls -f

NAME           STATE AUTOSTART GROUPS IPV4       IPV6

hooks_container FROZEN 0         -     10.0.3.237 -

root@ubuntu:~#

The container state shows as frozen; let's check the cgroup file:

root@ubuntu:~# cat /sys/fs/cgroup/freezer/lxc/hooks_container/freezer.state

FROZEN

root@ubuntu:~#

To unfreeze it, run:

root@ubuntu:~# lxc-unfreeze --name hooks_container

root@ubuntu:~# lxc-ls -f

NAME           STATE   AUTOSTART GROUPS IPV4       IPV6

hooks_container RUNNING 0         -     10.0.3.237 -

root@ubuntu:~# cat /sys/fs/cgroup/freezer/lxc/hooks_container/freezer.state

THAWED

root@ubuntu:~#

We can monitor the state change by running the lxc-monitor command on a separate console while freezing and unfreezing a container. The change of the container's state will show as follows:

root@ubuntu:~# lxc-monitor --name hooks_container

'hooks_container' changed state to [FREEZING]

'hooks_container' changed state to [FROZEN]

'hooks_container' changed state to [THAWED]

Limiting container resource usage

For limiting container resource usage, LXC comes with tools that are just as straightforward and easy to use as the rest of its userspace utilities.

Let's start by setting the available memory of a container to 512 MB:

root@ubuntu:~# lxc-cgroup -n hooks_container memory.limit_in_bytes 536870912

root@ubuntu:~#

We can verify that the new setting has been applied by directly inspecting the memory cgroup for the container:

root@ubuntu:~# cat /sys/fs/cgroup/memory/lxc/hooks_container/memory.limit_in_bytes

536870912

root@ubuntu:~#

Changing the value only requires running the same command again. Let’s change the available memory to 256 MB and inspect the container by attaching to it and running the free utility:

root@ubuntu:~# lxc-cgroup -n hooks_container memory.limit_in_bytes 268435456

root@ubuntu:~# cat /sys/fs/cgroup/memory/lxc/hooks_container/memory.limit_in_bytes

268435456

root@ubuntu:~# lxc-attach --name hooks_container

root@hooks_container:~# free -m

             total       used       free     shared   buffers     cached

Mem:           256         63       192         0          0         54

-/+ buffers/cache:         9       246

Swap:           0         0         0

root@hooks_container:~# exit

root@ubuntu:~#

As the preceding output shows, the container only sees 256 MB of total available memory.

Similarly, we can pin a CPU core to the container. In the next example, our test server has two cores. Let's allow the container to run only on core 0:

root@ubuntu:~# cat /proc/cpuinfo | grep processor

processor      : 0

processor      : 1

root@ubuntu:~#

root@ubuntu:~# lxc-cgroup -n hooks_container cpuset.cpus 0

root@ubuntu:~# cat /sys/fs/cgroup/cpuset/lxc/hooks_container/cpuset.cpus

0

root@ubuntu:~# lxc-attach --name hooks_container

root@hooks_container:~# cat /proc/cpuinfo | grep processor

processor       : 0

root@hooks_container:~# exit

exit

root@ubuntu:~#

By attaching to the container and checking the available CPUs, we see that only one is presented, as expected.

To make the changes persist across server reboots, we need to add them to the configuration file of the container:

root@ubuntu:~# echo "lxc.cgroup.memory.limit_in_bytes = 536870912" >> /var/lib/lxc/hooks_container/config

root@ubuntu:~#
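
Other limits can be persisted the same way, by prefixing the cgroup subsystem and parameter with lxc.cgroup; for example, to keep the container pinned to core 0 across restarts (following the same pattern):

root@ubuntu:~# echo "lxc.cgroup.cpuset.cpus = 0" >> /var/lib/lxc/hooks_container/config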

Setting various other cgroup parameters is done in a similar way. For example, let's set the CPU shares and the block I/O weight of a container:

root@ubuntu:~# lxc-cgroup -n hooks_container cpu.shares 512

root@ubuntu:~# lxc-cgroup -n hooks_container blkio.weight 500

root@ubuntu:~# lxc-cgroup -n hooks_container blkio.weight

500

root@ubuntu:~#
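
As with the memory limit, these values can also be read back directly from the host's cgroup hierarchy; a quick check, assuming the same cgroup layout used throughout this article:

root@ubuntu:~# cat /sys/fs/cgroup/cpu/lxc/hooks_container/cpu.shares

512

root@ubuntu:~# cat /sys/fs/cgroup/blkio/lxc/hooks_container/blkio.weight

500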

Summary

In this article, we demonstrated how to install LXC, build containers using the provided templates, and perform some basic operations to manage the instances.
