Dec 20 2016

I’ve been using and following the development of the LXC (Linux Containers) project for a long time. Unfortunately it never really had the success it deserved, and in recent years new technologies such as Docker and rkt have pretty much redefined the common understanding of a container on their own terms. Nonetheless, LXC still claims its niche as a full Linux operating system container solution, especially suited for persistent pet containers, an area where the new players on the market are still figuring out how to implement this properly within their concept. LXC development hasn’t stalled, quite the contrary: the API was extended with an HTTP REST interface (served by the Linux Container Daemon, LXD), support for container live-migration was implemented, container image management was added, and much more. This means there are a lot of reasons why someone, including me, would want to use Linux containers and LXD.

Enable LXD COPR repository
LXD is not officially packaged for Fedora. Therefore I spent the last few weeks creating some community packages via the Fedora COPR build system and repository service. Similar to the better known Ubuntu PPA (Personal Package Archive) system, COPR provides an RPM package repository which can easily be consumed by Fedora users. To use the LXD repository, all you need to do is enable it via dnf:

# dnf copr enable ganto/lxd

Please note that COPR packages are not reviewed by the Fedora package maintainers, therefore you should only install packages from authors you trust. For this reason I also provide a GitHub repository with the RPM spec files, so that anyone who feels uncomfortable using the pre-built RPMs from the repository can build them on their own.
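
For reference, a local build from the spec files could look roughly like this (a sketch; the repository URL and spec file name below are assumptions, not the actual ones):

$ git clone https://github.com/ganto/lxd-spec.git   # hypothetical repository name
$ cd lxd-spec
$ sudo dnf builddep lxd.spec    # install the build dependencies
$ spectool -g -R lxd.spec       # fetch the source tarballs (rpmdevtools)
$ rpmbuild -ba lxd.spec         # build the source and binary RPMs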

Install and start LXD
LXD is split into multiple packages. The important ones are lxd, the Linux Container Daemon, and lxd-client, which provides the LXD client binary called lxc. Install them with:

# dnf install lxd lxd-client

Unfortunately I didn’t have time to figure out the correct SELinux labels for LXD yet, therefore you need to disable SELinux prior to starting the daemon (see below). LXD supports user namespaces to map the root user in a container to an unprivileged user ID on the container host. For this you need to assign a UID range on the host:

# echo "root:1000000:65536" >> /etc/subuid
# echo "root:1000000:65536" >> /etc/subgid

If you don’t do this, user namespaces won’t be used, which is indicated by messages such as:

lvl=warn msg="Error reading idmap" err="User \"root\" has no subuids."
lvl=warn msg="Only privileged containers will be able to run"

Finally, start LXD with:

# systemctl start lxd.service
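
If LXD should also come up after a reboot, enable the unit as well:

# systemctl enable lxd.service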

LXD configuration
LXD doesn’t have a configuration file. Configuration properties must be set and retrieved via client commands. Here you can find a list of all supported configuration properties. Most tutorials suggest initially running lxd init, which generates a basic configuration. However, only a limited set of configuration options is available via this command, therefore I prefer to set the properties via the LXD client. A normal user account can be used to manage LXD via the client if it’s a member of the lxd POSIX group:

# usermod --append --groups lxd myuser
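
After logging in again so the new group membership takes effect, you can verify that the client can talk to the daemon, for example with:

$ lxc list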

By default LXD stores its images and containers in directories under /var/lib/lxd. Alternative storage back-ends such as LVM, Btrfs or ZFS are available. Here I will show an example of how to use LVM. Similar to the recommended Docker setup on Fedora, it will use LVM thin volumes to store images and containers. First create an LVM thin pool. For this we still need some space available in the default volume group; alternatively you can use a second disk with a dedicated volume group. Replace vg00 with the volume group name you want to use:

# lvcreate --size 20G --type thin-pool --name lxd-pool vg00
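
The pool and its usage can be inspected at any time with lvs, e.g. (data_percent shows how full the thin pool is):

# lvs -o lv_name,lv_size,data_percent vg00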

Now we set this thin pool as storage back-end in LXD:

$ lxc config set storage.lvm_vg_name vg00
$ lxc config set storage.lvm_thinpool_name lxd-pool

For each image which is downloaded, LXD will create a thin volume storing the image. When a new container is instantiated, a writable snapshot is created, from which you can in turn create an image again or make further snapshots for fast roll-back. By default the container file system will be ext4. If you prefer XFS, it can be set with the following command:

$ lxc config set storage.lvm_fstype xfs
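
The storage properties set so far can be reviewed together with the rest of the server configuration:

$ lxc config show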

Various options are also available for networking. If you ran lxd init, you may already have created a lxdbr0 network bridge. Otherwise, I will show you how to manually create one, in case you want a dedicated container bridge, or how to attach LXD to an already existing bridge which is configured through an external DHCP server.

To create a dedicated network bridge where the traffic will be NAT‘ed to the outside, run:

$ lxc network create lxdbr0

This will create a bridge device with the given name and also start up a dedicated instance of dnsmasq which acts as DNS and DHCP server for the container network.
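
If you don’t want LXD to pick a random subnet, the bridge can be tuned further via its network properties, for example (the address is just a placeholder):

$ lxc network set lxdbr0 ipv4.address 10.100.0.1/24
$ lxc network set lxdbr0 ipv4.nat true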

A big advantage of LXD compared to plain LXC is a feature called container profiles. In a profile you can define settings which should be applied to new container instances. In our case, we want containers to use the network bridge created before (or any other network bridge which was created independently). For this it will be added to the “default” profile, which is applied automatically when creating a new container:

$ lxc network attach-profile lxdbr0 default eth0

Here eth0 is the network device name which will be used inside the container. We could also add multiple network bridges, or create multiple profiles (lxc profile create newprofile) with different network settings, as sketched below.
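
For example, a second profile with its own network device could be set up like this (profile, device and container names are made up):

$ lxc profile create newprofile
$ lxc network attach-profile lxdbr0 newprofile eth1
$ lxc launch images:fedora/24 test-container --profile newprofile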

Create a container
Finally we have the most important pieces together to launch a container. A container is always instantiated from an image. The LXC project provides an image repository with a large number of prebuilt container images, pre-configured under the remote name images:. The images are regular LXC containers created via the upstream lxc-create script using the various distribution templates. To list the available images run:

$ lxc image list images:
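
The list is quite long, so it can be narrowed down with a filter argument, e.g.:

$ lxc image list images: fedora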

Once you’ve found an image you want to run, it can be started as follows. Of course, in my example I will use a Fedora 24 container (unfortunately there are no Fedora 25 images available yet, but I’m also working on that):

$ lxc launch images:fedora/24 my-fedora-container

With the following command you can open an interactive shell inside the container:

$ lxc exec my-fedora-container /bin/bash
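
From here on the usual life-cycle commands apply; a few examples (the snapshot name is arbitrary):

$ lxc snapshot my-fedora-container clean-state   # create a snapshot
$ lxc restore my-fedora-container clean-state    # roll back to it
$ lxc stop my-fedora-container
$ lxc delete my-fedora-container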

I hope this short guide made you curious to try LXD on Fedora. I’m glad to hear feedback via comments or email if you find this guide or my COPR repository useful, or if you have corrections or found some issues.

Further reading
If you want to know more about how to use the individual features of LXD, I can recommend the how-to series by Stéphane Graber, one of the core developers of LXC/LXD:

Oct 31 2012

As a Linux enthusiast and Gentoo user I was always looking for the perfect boot experience. While I managed to boot my kernel with EFI and GRUB 2 (as described in my wiki), I still had some trouble getting OpenRC to play nice with my LVM-only setup initialized by dracut. Tonight I finally figured out the missing configuration pieces to silence all warnings during system init.

Initial situation
All my Linux partitions are stored in a single LVM volume group, to stay as flexible as possible:

merkur ~ # lsblk
NAME                       MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
[...]
└─sda5                       8:5    0 49.5G  0 part
  ├─vg_merkur-slash (dm-0) 253:0    0  2.5G  0 lvm  /
  ├─vg_merkur-boot (dm-1)  253:1    0  200M  0 lvm  /boot
  ├─vg_merkur-tmp (dm-2)   253:2    0    6G  0 lvm  /tmp
  ├─vg_merkur-swap (dm-3)  253:3    0    4G  0 lvm  [SWAP]
  ├─vg_merkur-var (dm-4)   253:4    0    4G  0 lvm  /var
  ├─vg_merkur-usr (dm-5)   253:5    0 12.8G  0 lvm  /usr
  └─vg_merkur-opt (dm-6)   253:6    0    8G  0 lvm  /opt

My boot toolset currently consists of grub-2.00-r1, kernel-3.6.4, dracut-024, lvm-2.02.95-r4 and openrc-0.11.2.

Kernel Configuration
Before compiling the kernel, make sure to include all the required configuration options. For this setup, the most important ones are listed below (a quick way to verify them follows the list):

CONFIG_BLK_DEV_INITRD
CONFIG_DEVTMPFS
CONFIG_MODULES
CONFIG_SYSVIPC
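
If the running kernel exposes its configuration (CONFIG_IKCONFIG_PROC), these options can be verified directly; otherwise grep the .config in the kernel source tree:

zgrep -E 'CONFIG_(BLK_DEV_INITRD|DEVTMPFS|MODULES|SYSVIPC)=' /proc/config.gz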

Dracut Configuration
Before installing dracut, the desired modules have to be configured in /etc/make.conf:

DRACUT_MODULES="caps lvm mdraid syslog"

For this setup, at least the “lvm” module is mandatory. Furthermore, dracut was built with the “device-mapper” USE flag enabled.

Although some Linux developers (especially from Red Hat/Fedora) advise against a separate /usr partition because of the many boot-time dependencies on this system path, I didn’t bother to change my years-old setup. Since version 014, dracut includes a module to fill this gap (/usr/lib/dracut/modules.d/98usrmount/mount-usr.sh). It simply mounts the /usr partition right after the root file system, early in the boot process. Therefore we have to make sure that the dracut modules “usrmount” and “lvm” are included in the initramfs. This was possible without any manual modification of /etc/dracut.conf when generating the boot image with:

dracut -H
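
Whether the modules actually ended up in the image can be checked by grepping the file list printed by dracut’s lsinitrd tool (adjust the image path if your naming differs):

lsinitrd /boot/initramfs-$(uname -r).img | grep -E 'mount-usr|lvm'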

Kernel Command Line Configuration
Dracut runtime parameters are given on the kernel command line in the GRUB configuration. To automatically enable the LVM volume group and spawn a debug shell in case the boot fails, I added the following parameters in grub:

root=/dev/vg_merkur/slash rd.lvm.vg=vg_merkur rd.shell
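
For reference, a minimal grub.cfg menu entry could then look like this (the kernel and initramfs file names are assumptions based on my versions):

menuentry 'Gentoo Linux 3.6.4' {
    linux  /vmlinuz-3.6.4 root=/dev/vg_merkur/slash rd.lvm.vg=vg_merkur rd.shell
    initrd /initramfs-3.6.4.img
}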

LVM Configuration
Since dracut is now responsible for enabling our volume group, the corresponding init script has to be disabled:

rc-update del lvm boot

Fsck and Fstab
When booting the system now, the /etc/init.d/fsck script will complain that it cannot check file systems which are already mounted. Fortunately, the init script allows us to define that fsck should only run for specific “fs_passno” values. I therefore set this value to “1” for the file systems which are mounted by dracut and to “2” for all the file systems which should be checked by OpenRC. Take care: when specifying a value of “0”, the file system will never be checked for consistency:

# [fs]               [mountpoint] [type] [opts]                       [dump/pass]
/dev/vg_merkur/boot  /boot        ext2   noatime,nosuid,nodev         0 2
/dev/vg_merkur/slash /            ext4   noatime,discard              0 1
/dev/vg_merkur/usr   /usr         ext4   noatime,discard,nodev        0 1
/dev/vg_merkur/var   /var         ext4   noatime,discard,nosuid,nodev 0 2
/dev/vg_merkur/opt   /opt         ext4   noatime,discard,nosuid,nodev 0 2
/dev/vg_merkur/tmp   /tmp         ext4   noatime,discard,nosuid,nodev 0 2
/dev/vg_merkur/swap  none         swap   sw                           0 0

In /etc/conf.d/fsck we can then define that the fsck init script should only care about file systems with a “fs_passno” larger than “1”:

fsck_passno=">1"

That’s it… If you have some questions or hints, please leave a comment.