Dec 20 2016
 

For a long time I have been using and following the development of the LXC (Linux Container) project. Unfortunately it never really had the success it deserved, and in recent years new technologies such as Docker and rkt have pretty much redefined the common understanding of a container on their own terms. Nonetheless, LXC still claims its niche as a full Linux operating system container solution, especially suited for persistent "pet" containers, an area where the newer players on the market are still figuring out how to implement this properly within their own concepts. LXC development hasn't stalled, quite the contrary: the API was extended with an HTTP REST interface (served by the Linux Container Daemon, LXD), support for container live-migration was implemented, container image management was added, and much more. So there are plenty of reasons why someone, including me, would want to use Linux containers and LXD.

Enable LXD COPR repository
LXD is not officially packaged for Fedora, so I spent the last few weeks creating community packages via Fedora's COPR build system and repository service. Similar to the better-known Ubuntu PPA (Personal Package Archive) system, COPR provides an RPM package repository which can easily be consumed by Fedora users. To use the LXD repository, all you need to do is enable it via dnf:

# dnf copr enable ganto/lxd

Please note that COPR packages are not reviewed by the Fedora package maintainers, therefore you should only install packages whose author you trust. For this reason I also provide a GitHub repository with the RPM spec files, so that everyone can build the RPMs on their own if they feel uncomfortable using the pre-built RPMs from the repository.

Install and start LXD
LXD is split into multiple packages. The important ones are lxd, the Linux Container Daemon, and lxd-client, which provides the LXD client binary called lxc. Install them with:

# dnf install lxd lxd-client

Unfortunately I haven't had time to figure out the correct SELinux labels for LXD yet, therefore you need to disable SELinux prior to starting the daemon (see the example further below). LXD supports user namespaces to map the root user in a container to an unprivileged user ID on the container host. For this you need to assign a UID and GID range to the root user on the host:

# echo "root:1000000:65536" >> /etc/subuid
# echo "root:1000000:65536" >> /etc/subgid

If you don't do this, user namespaces won't be used, which is indicated by log messages such as:

lvl=warn msg="Error reading idmap" err="User \"root\" has no subuids."
lvl=warn msg="Only privileged containers will be able to run"

Now start LXD with:

# systemctl start lxd.service
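
If you also want the daemon to come up again after a reboot, and quickly verify that it is responding, something like the following should do; lxc info without arguments queries the local daemon via its Unix socket:

# systemctl enable lxd.service
# lxc info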

LXD configuration
LXD doesn't have a configuration file; configuration properties must be set and retrieved via client commands. Here you can find a list of all supported configuration properties. Most tutorials suggest initially running lxd init, which generates a basic configuration. However, only a limited set of configuration options is available via this command, therefore I prefer to set the properties via the LXD client. A normal user account can manage LXD via the client when it is a member of the lxd POSIX group:

# usermod --append --groups lxd myuser
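
The new group membership only takes effect with the next login. After logging in again, a simple way to verify that the unprivileged account can reach the daemon is to query the (still empty) container list:

$ lxc list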

By default LXD stores its images and containers in directories under /var/lib/lxd. Alternative storage back-ends such as LVM, Btrfs or ZFS are available. Here I will show an example of how to use LVM. Similar to the recommended Docker setup on Fedora, it will use LVM thin volumes to store images and containers. First create an LVM thin pool. This requires some free space in the default volume group; alternatively you can use a second disk with a dedicated volume group. Replace vg00 with the volume group name you want to use:

# lvcreate --size 20G --type thin-pool --name lxd-pool vg00

Now we set this thin pool as the storage back-end in LXD:

$ lxc config set storage.lvm_vg_name vg00
$ lxc config set storage.lvm_thinpool_name lxd-pool
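
To confirm that the keys were stored as intended, the current server configuration can be printed with the client at any time:

$ lxc config show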

For each downloaded image LXD will create a thin volume storing the image. When a new container is instantiated, a writeable snapshot of that volume is created, from which you can again create an image or take further snapshots for fast roll-back. By default the container file system will be ext4. If you prefer XFS, it can be set with the following command:

$ lxc config set storage.lvm_fstype xfs
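
Once the first images and containers exist, this layering is directly visible on the LVM side. Listing the logical volumes of the volume group (vg00 and the pool name lxd-pool from above) should show the thin pool together with the image volumes and container snapshots:

# lvs vg00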

For networking, various options are available as well. If you ran lxd init, you may already have created a lxdbr0 network bridge. Otherwise I will show you how to manually create one in case you want a dedicated container bridge, or how to attach LXD to an already existing bridge which is configured through an external DHCP server.

To create a dedicated network bridge where the traffic will be NATed to the outside, run:

$ lxc network create lxdbr0

This will create a bridge device with the given name and also start a dedicated instance of dnsmasq which acts as DNS and DHCP server for the container network.
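
The generated settings, for example the automatically chosen subnet, can be reviewed and adjusted afterwards. A short sketch, assuming the network configuration keys of current LXD releases (the 10.214.10.1/24 subnet is only an example):

$ lxc network show lxdbr0
$ lxc network set lxdbr0 ipv4.address 10.214.10.1/24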

A big advantage of LXD compared to plain LXC is a feature called container profiles, where you can define settings which should be applied to new container instances. In our case we want containers to use the network bridge created before, or any other independently created network bridge. For this, it is added to the "default" profile, which is applied by default when creating a new container:

$ lxc network attach-profile lxdbr0 default eth0

Here eth0 is the network device name which will be used inside the container. We could also add multiple network bridges or create multiple profiles (lxc profile create newprofile) with different network settings.
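
To double-check what a new container will inherit, the resulting profile can be printed; the eth0 device attached above should appear in its device list:

$ lxc profile show default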

Create a container
Finally we have the most important pieces together to launch a container. A container is always instantiated from an image. The LXC project provides an image repository with a large number of prebuilt container images, pre-configured under the remote name images:. The images are regular LXC containers created via the upstream lxc-create script using the various distribution templates. To list the available images, run:

$ lxc image list images:
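
The full list is rather long. Appending a search string should narrow it down, so listing only the Fedora images could look like this:

$ lxc image list images: fedora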

Once you have found an image you want to run, it can be started as follows. Of course in my example I will use a Fedora 24 container (unfortunately there are no Fedora 25 images available yet, but I'm working on that too):

$ lxc launch images:fedora/24 my-fedora-container

With the following command you can open an interactive shell inside the container:

$ lxc exec my-fedora-container /bin/bash
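
From here the usual container lifecycle is driven with the same client. A few commands I find myself using all the time (the snapshot name before-update is just an example):

$ lxc list
$ lxc snapshot my-fedora-container before-update
$ lxc stop my-fedora-container
$ lxc delete my-fedora-container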

I hope this short guide made you curious to try LXD on Fedora. I'm glad to hear feedback via comments or email if you find this guide or my COPR repository useful, or if you have corrections or found some issues.

Further reading
If you want to know more about how to use the individual features of LXD, I can recommend the how-to series by Stéphane Graber, one of the core developers of LXC/LXD:

Dec 01 2015
 

I recently had the challenge of migrating a physical host running Debian, installed on an ancient 32-bit machine with a software RAID 1, to a virtual machine (VM). The migration should be as simple as possible, so I decided to attach the original disks to the virtualization host and define a new VM which would boot from these disks. This doesn't sound too complicated, but it still included some steps I was not so familiar with, so I'm writing them down here for later reference. As an experiment I tried the migration on both a KVM and a Xen host.

Prepare original host for virtualized environment

Enable virtualized disk drivers
The host I wanted to migrate was running a Debian Jessie i686 installation on bare-metal hardware. For booting it uses an initramfs which loads the required disk controller drivers. By default the initramfs only includes the drivers for the current hardware, so the paravirtualization drivers for KVM (virtio-blk) and Xen (xen-blkfront) are missing. This can be changed by adjusting /etc/initramfs-tools/conf.d/driver-policy so that not only the currently required (dep) modules are included, but most:

# Driver inclusion policy selected during installation
# Note: this setting overrides the value set in the file
# /etc/initramfs-tools/initramfs.conf
MODULES=most

Afterwards the initramfs must be rebuilt by running the following command (adjust the kernel version according to the kernel you are running):

# dpkg-reconfigure linux-image-3.16.0-4-686-pae

Now the initramfs contains all required modules to successfully boot the host from a KVM virtio or a Xen blkfront disk device. This can be checked with:

# lsinitramfs /boot/initrd.img-3.16.0-4-686-pae | egrep -i "(xen|virtio).*blk.*\.ko"
lib/modules/3.16.0-4-686-pae/kernel/drivers/block/virtio_blk.ko
lib/modules/3.16.0-4-686-pae/kernel/drivers/block/xen-blkback/xen-blkback.ko
lib/modules/3.16.0-4-686-pae/kernel/drivers/block/xen-blkfront.ko

Enable serial console
To easily access the server console from the virtualization host, it's required to set up a serial console in the guest, which is defined in /etc/default/grub:

[...]
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1"
[...]
# Uncomment to disable graphical terminal (grub-pc only)
GRUB_TERMINAL="console serial"

After regenerating the Grub configuration, the boot loader and local console will then also be accessible via virsh console:

# grub-mkconfig -o /boot/grub/grub.cfg

Run the host on a KVM hypervisor

KVM is nowadays the de facto standard virtualization solution for running Linux on Linux and is very well integrated into all major distributions and a wide range of management tools. Therefore it's really simple to boot the disks in a KVM VM:

  • Make sure the disks are not already assembled as an md device on the KVM host. I did this by uninstalling the mdadm utility.
  • Run the following virt-install command to define and start the VM:
    # virt-install --connect qemu:///system --name myhost --memory 1024 --vcpus 2 --cpu host --os-variant debian7 --arch i686 --import --disk /dev/disk/by-id/ata-WDC_WD10EFRX-68PJCN0_WD-WCC4J4527214 --disk /dev/disk/by-id/ata-WDC_WD10EFRX-68PJCN0_WD-WCC4J4366412 --network bridge=virbr0 --graphics=none

To avoid confusion with the disk enumeration of the KVM host kernel, the disk devices are given as unique device paths instead of e.g. /dev/sdb and /dev/sdc. This clearly identifies the two disk devices, which will then be assembled into a RAID 1 by the guest kernel.
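
If you don't know the stable by-id paths of the attached disks by heart, they can simply be looked up on the host, as the symlinks point back to the kernel device names:

# ls -l /dev/disk/by-id/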

Run the host on a Xen hypervisor

Xen has been a topic on this blog a few times before, as I have been using it since its early days. After a long period of major code refactoring it has also made huge steps forward within the last few years, so that nowadays it's nearly as easy to set up as KVM and again well supported in the large distributions. Because its architecture is a bit different and Linux is not the only supported platform, some things don't work quite the same as under KVM. Let's find out…

Prepare Xen Grub bootloader
Xen virtual machines can be booted in various ways, depending among other things on whether the guest disk is running on an emulated disk controller or as a paravirtualized disk. In simple cases this is done with the help of PyGrub, which loads a Grub Legacy grub.conf or a Grub 2 grub.cfg from the guest's /boot partition. As my grub.cfg and kernel are "hidden" in a RAID 1, a fully featured Grub 2 is required. It is perfectly able to load its configuration from advanced disk setups such as RAID arrays, LVM volumes and more. With Xen such a setup is only possible when using a dedicated boot loader disk image, which must be built for the target architecture of the virtual machine. As I want to run a 32-bit virtual machine, an i386-xen Grub 2 flavor needs to be compiled as the basis for the boot loader image:

  • Clone the Grub 2 source code:
    $ git clone git://git.savannah.gnu.org/grub.git
  • Configure and build the source code. I ran the following commands on a Gentoo Xen server without issues. On other distributions you might need to install some development packages first:
    $ cd grub
    $ ./autogen.sh
    $ ./configure --target=i386 --with-platform=xen
    $ make -j3
  • Prepare a grub.cfg for the Grub image. The Grub 2 installation of the Debian guest resides on a mirrored boot disk /dev/md0 which was mounted at /boot. Therefore the Grub image must be told to load the actual configuration from there:
    # grub.cfg for Xen Grub image grub-md0-i386-xen.img
    normal (md/0)/grub/grub.cfg
  • Create the Grub image using the previously compiled Grub modules:
    # grub-mkimage -O i386-xen -c grub.cfg -o /var/lib/libvirt/images/grub-md0-i386-xen.img -d ~user/grub/grub-core ~user/grub/grub-core/*.mod

A more generic guide about generating Xen Grub images can be found at Using Grub 2 as a bootloader for Xen PV guests.

Define the virtual machine
virt-install can also be used to import disks (or disk images) as Xen virtual machines. This works perfectly fine for e.g. importing Atomic Host raw images, but unfortunately our setup is a bit more complicated: it requires specifying the custom Grub 2 image as the guest kernel, for which I couldn't find a corresponding virt-install argument (using v1.3.0). Therefore I first created a plain Xen xl configuration file and then converted it to a libvirt XML file. The initial configuration /etc/xen/myhost.cfg looks as follows:

name = "myhost"
kernel = "/var/lib/libvirt/images/grub-md0-i386-xen.img"
memory = 1024
vcpus = 2
vif = [ 'bridge=virbr0' ]
disk = [ '/dev/disk/by-id/ata-WDC_WD10EFRX-68PJCN0_WD-WCC4J4527214,raw,xvda,rw', '/dev/disk/by-id/ata-WDC_WD10EFRX-68PJCN0_WD-WCC4J4366412,raw,xvdb,rw' ]
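
Before converting it, the configuration can optionally be test-booted directly with the xl toolstack; the -c flag immediately attaches to the guest console, so the serial console configured earlier should show the Grub menu and the boot messages:

# xl create -c /etc/xen/myhost.cfg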

This configuration can then be converted to a libvirt domain XML and defined with the following commands:

# virsh domxml-from-native --format xen-xl --config /etc/xen/myhost.cfg > /etc/libvirt/libxl/myhost.xml
# virsh --connect xen:/// define /etc/libvirt/libxl/myhost.xml
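
Once defined, the virtual machine can be started and its serial console attached through libvirt like any other domain:

# virsh --connect xen:/// start myhost
# virsh --connect xen:/// console myhost
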
Summary

In the end, I successfully converted the bare-metal host to a KVM and/or Xen virtual machine. Once again Xen ended up being a bit more troublesome, but no nasty hacks were required and both configurations were quite straightforward. The boot loader setup for Xen could definitely be a bit simpler, but it's still nice to see how well Grub 2 and Xen play together now. It's also pleasing that no virtualization-specific configuration had to be made within the guest installation. The fact that the disk device names changed from /dev/sd[ab] (bare metal) to /dev/vd[ab] (KVM) and /dev/xvd[ab] (Xen) was completely absorbed by the mdadm layer, but even without software RAID, the installers nowadays generate /etc/fstab entries in a way that makes such migrations easily possible.

Thanks for reading. If you have some comments, don’t hesitate to write a note below.