Nov 06 2019
 

My activities in October were mostly related to updating my COPR repositories for CentOS 8 and cleaning up the old repositories:

  • I updated the ganto/jo COPR repository to support CentOS 8.
  • I updated the ganto/vcsh COPR repository to support CentOS 8 and added package builds for the alternative architectures (aarch64 and ppc64le).
  • Thanks to the help of jmontleon I was finally able to build LXD, which is now available for CentOS 8 in my ganto/lxc3 repository. I also updated the RPM to the latest stable release, LXD 3.18 (a short sketch of enabling the repository on CentOS 8 follows after this list).
  • After years of development, the distrobuilder tool, which is meant to replace the shell-script-based LXC templates, was tagged in a first 1.0 release that should now also be able to build CentOS 8 container images. Of course I updated the corresponding RPM in the ganto/lxc3 COPR repository accordingly. I’m not sure how they decide on new releases, so I might go back to building regular git snapshot releases of this tool in the future.
  • I updated the ganto/goaccess COPR repository to support CentOS 8 and also bumped the built goaccess version to a git snapshot from May 2019 based on version 1.3. Unfortunately the official Fedora package is still only at version 1.2. I first tested the latest git snapshot but found that it is affected by a bug (GitHub issue 1575) which causes the access graphs to render incorrectly.
  • The last COPR repository pending an update for CentOS 8 is ganto/umoci, which still fails to build because go-md2man is missing from EPEL 8.
  • I deleted some outdated COPR repositories (ganto/lxc, ganto/lxd, ganto/lxdock) and archived the related GitHub repositories holding the RPM spec files.
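
For reference, consuming one of these repositories on a CentOS 8 host boils down to enabling the COPR plugin and the repository. A minimal sketch, shown here for ganto/lxc3 as an example (the plugin package may already be installed, and you may be asked to confirm the distribution and architecture):

# dnf install dnf-plugins-core      # provides the 'dnf copr' sub-command
# dnf copr enable ganto/lxc3        # enable the COPR repository
# dnf install lxd                   # install a package built from it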

Then I also experimented with adding Debian machines to a CentOS FreeIPA identity management server via Ansible. Years ago I wrote an Ansible role, freeipa-client, which was able to do that but still required manual setup of the Kerberos keytab on the client machine. I plan to replace it with a collection of new roles that blend in with DebOps as much as possible, but unfortunately there is nothing ready to show yet.

Finally, as always, I updated a lot of ebuilds in my linuxmonk-overlay Gentoo overlay.

Oct 01 2019
 

I’m starting a new series of blog posts summarizing my various activities regarding free software projects. There might not be something worth mentioning every month, but this month I was quite busy, which might be interesting for some of you.

Below I’ll list some of the free software activities I was involved in during September:

  • After the official release of CentOS 8, I started rebuilding the packages in my lxc3 COPR repository for CentOS 8. The lxd package is still missing and I’m planning to provide it for CentOS 8 together with the pending update to lxd-3.17. A rebuild of the packages in my various other COPR repositories can be expected in the coming weeks.
  • Being the package maintainer of the spectre-meltdown-checker package in Fedora and EPEL, I followed the instructions to request a package branch for epel-8. This was approved a few hours ago, so the package is now available via Koji and awaits approval in Bodhi for inclusion into the EPEL testing and eventually stable repository. Please give some karma if you’d like to accelerate this (a sketch of installing it from the testing repository follows after this list).
  • I merged some pull requests in the Gentoo go-overlay git repository, where the original maintainer entrusted me with commit permissions. Because he hasn’t been active since last December, I used the chance to clean up the repository so it passes the repoman checks again, and eventually merged a PR for the latest Traefik 1.x (1.7.18) release.
  • I put some effort into packaging the Gnome 3.34 release in my personal Gentoo linuxmonk-overlay. Of course I’m running it on my main workstation on top of Wayland without any major issues so far. Give it a try if you can’t wait for the official ebuilds to be ready.
  • I released version 0.1.2 of my acme-tiny Ansible role, which fixes an annoying bug: if a certificate renewal was unsuccessful, a still-valid certificate could be overwritten with an empty file. Now the role makes a backup copy of the old certificate by default and validates the new certificate before replacing the old one.
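
If you’d like to test the spectre-meltdown-checker update before it reaches the stable EPEL repository, a hedged sketch of pulling it from the testing repository on a RHEL/CentOS 8 machine:

# dnf --enablerepo=epel-testing install spectre-meltdown-checker
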
Dec 20 2016
 

For a long time I have been using and following the development of the LXC (Linux Containers) project. I feel that it unfortunately never really had the success it deserved, and in recent years new technologies such as Docker and rkt have pretty much redefined the common understanding of a container according to their own terms. Nonetheless LXC still claims its niche as a full Linux operating system container solution, especially suited for persistent pet containers, an area where the new players on the market are still figuring out how to implement this properly according to their concepts. LXC development hasn’t stalled, quite the contrary: they extended the API with an HTTP REST interface (served via the Linux Container Daemon, LXD), implemented support for container live-migration, added container image management and much more. This means that there are a lot of reasons why someone, including me, would want to use Linux containers and LXD.

Enable LXD COPR repository
LXD is not officially packaged for Fedora. Therefore I spent the last few weeks creating some community packages via Fedora’s COPR build system and repository service. Similar to the better-known Ubuntu PPA (Personal Package Archive) system, COPR provides an RPM package repository which can easily be consumed by Fedora users. To use the LXD repository, all you need to do is enable it via dnf:

# dnf copr enable ganto/lxd

Please note that COPR packages are not reviewed by the Fedora package maintainers, therefore you should only install packages from authors you trust. For this reason I also provide a GitHub repository with the RPM spec files, so that everyone can build the RPMs on their own if they feel uncomfortable using the pre-built RPMs from the repository.
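
As a rough sketch of how such a local rebuild could look (the repository URL and spec file name below are only examples; check the actual GitHub repository for the exact layout):

$ git clone https://github.com/ganto/copr-lxd.git && cd copr-lxd
# dnf install rpm-build rpmdevtools dnf-plugins-core   # RPM build tooling
# dnf builddep lxd.spec                                # install the build dependencies
$ spectool -g -R lxd.spec                              # download the upstream sources
$ rpmbuild -ba lxd.spec                                # build source and binary RPMs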

Install and start LXD
LXD is split into multiple packages. The important ones are lxd, the Linux Container Daemon, and lxd-client, which provides the LXD client binary called lxc. Install them with:

# dnf install lxd lxd-client

Unfortunately I didn’t have time to figure out the correct SELinux labels for LXD yet, therefore you need to disable SELinux (or at least set it to permissive mode, see the sketch below) prior to starting the daemon. LXD supports user namespaces to map the root user in a container to an unprivileged user ID on the container host. For this you need to assign a UID and GID range to root on the host:

# echo "root:1000000:65536" >> /etc/subuid
# echo "root:1000000:65536" >> /etc/subgid

If you don’t do this, user namespaces won’t be used, which is indicated by a message such as:

lvl=warn msg="Error reading idmap" err="User \"root\" has no subuids."
lvl=warn msg="Only privileged containers will be able to run"
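
As mentioned above, SELinux needs to be out of the way before the daemon is started. A minimal sketch, assuming that permissive mode is acceptable on your host:

# setenforce 0                                                            # permissive until the next reboot
# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config   # make it persistent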

Then start LXD with:

# systemctl start lxd.service

LXD configuration
LXD doesn’t have a configuration file. Configuration properties must be set and retrieved via client commands. Here you can find a list of all supported configuration properties. Most tutorials will suggest initially running lxd init, which generates a basic configuration. However, only a limited set of configuration options is available via this command, therefore I prefer to set the properties via the LXD client. A normal user account can be used to manage LXD via the client when it is a member of the lxd POSIX group:

# usermod --append --groups lxd myuser

By default LXD will store its images and containers in directories under /var/lib/lxd. Alternative storage back-ends such as LVM, Btrfs or ZFS are available. Here I will show an example of how to use LVM. Similar to the recommended Docker setup on Fedora, it will use LVM thin volumes to store images and containers. First create an LVM thin pool. For this we still need some space available in the default volume group; alternatively you can use a second disk with a dedicated volume group (see the sketch after the next command). Replace vg00 with the volume group name you want to use:

# lvcreate --size 20G --type thin-pool --name lxd-pool vg00
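
Alternatively, for the dedicated volume group on a second disk mentioned above, a hedged sketch (assuming the disk is /dev/sdb and the volume group should be named lxd) could look like this:

# pvcreate /dev/sdb                                          # initialize the disk for LVM
# vgcreate lxd /dev/sdb                                      # create a dedicated volume group
# lvcreate --size 20G --type thin-pool --name lxd-pool lxd   # create the thin pool on it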

Now we set this thin pool as storage back-end in LXD:

$ lxc config set storage.lvm_vg_name vg00
$ lxc config set storage.lvm_thinpool_name lxd-pool

For each image that is downloaded, LXD will create a thin volume storing the image. When a new container is instantiated, a new writeable snapshot is created, from which you can again create an image or make further snapshots for fast roll-back. By default the container file system will be ext4. If you prefer XFS, it can be set with the following command:

$ lxc config set storage.lvm_fstype xfs
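
To verify the resulting server configuration after these changes, it can be dumped with the LXD client:

$ lxc config show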

Various options are also available for networking. If you ran lxd init, you may have already created a lxdbr0 network bridge. Otherwise I will show you how to manually create one, in case you want a dedicated container bridge, or how to attach LXD to an already existing bridge which is configured through an external DHCP server.

To create a dedicated network bridge where the traffic will be NATed to the outside, run:

$ lxc network create lxdbr0

This will create a bridge device with the given name and also start up a dedicated instance of dnsmasq which will act as DNS and DHCP server for the container network.
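
The bridge itself is also managed via the client. For example, it can be inspected or given a specific IPv4 subnet (the address below is just an example):

$ lxc network show lxdbr0                               # display the bridge configuration
$ lxc network set lxdbr0 ipv4.address 10.100.100.1/24   # assign a custom subnet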

A big advantage of LXD in comparison to plain LXC is a feature called container profiles. There you can define settings which should be applied to new container instances. In our case, we want containers to use the network bridge created before, or any other network bridge which was created independently. For this, it is added to the “default” profile, which is applied by default when creating a new container:

$ lxc network attach-profile lxdbr0 default eth0

Here eth0 is the network device name which will be used inside the container. We could also add multiple network bridges or create multiple profiles (lxc profile create newprofile) with different network settings.
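
To check what the resulting default profile looks like, it can be displayed with:

$ lxc profile show default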

Create a container
Finally we have the most important pieces together to launch a container. A container is always instantiated from an image. The LXC project provides an image repository with a large number of prebuilt container images, pre-configured under the remote name images:. The images are regular LXC containers created via the upstream lxc-create script using the various distribution templates. To list the available images run:

$ lxc image list images:
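
The list is quite long; it can be narrowed down by appending a search term, for example:

$ lxc image list images: fedora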

If you found an image you want to run, it can be started as follows. Of course in my example I will use a Fedora 24 container (unfortunately there are no Fedora 25 containers available yet, but I’m also working on that):

$ lxc launch images:fedora/24 my-fedora-container

With the following command you can create a console session into the container:

$ lxc exec my-fedora-container /bin/bash
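
For completeness, a few everyday commands to manage the container life cycle:

$ lxc list                         # show all containers and their addresses
$ lxc stop my-fedora-container     # stop the running container
$ lxc delete my-fedora-container   # remove the container again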

I hope this short guide made you curious to try LXD on Fedora. I’d be glad to hear some feedback via comments or email if you find this guide or my COPR repository useful, or if you have some corrections or found some issues.

Further reading
If you want to know more about how to use the individual features of LXD, I can recommend the how-to series of Stéphane Graber, one of the core developers of LXC/LXD: