Dec 01 2015
 

I recently had the challenge to migrate a physical host running Debian, installed on an ancient 32-bit machine with a software RAID 1, to a virtual machine (VM). The migration should be as simple as possible, so I decided to attach the original disks to the virtualization host and define a new VM which would boot from these disks. This doesn’t sound too complicated, but it still involved some steps I was not so familiar with, so I’m writing them down here for later reference. As an experiment I tried the migration on both a KVM and a Xen host.

Prepare the original host for the virtualized environment

Enable virtualized disk drivers
The host I wanted to migrate was running a Debian Jessie i686 installation on bare metal hardware. For booting it uses an initramfs which loads the required disk controller drivers. By default the initramfs only includes the drivers for the current hardware, so the paravirtualization drivers for KVM (virtio-blk) and Xen (xen-blkfront) are missing. This can be changed by adjusting the /etc/initramfs-tools/conf.d/driver-policy file so that not only the modules for the currently used hardware (dep) are included, but most available ones (most):

# Driver inclusion policy selected during installation
# Note: this setting overrides the value set in the file
# /etc/initramfs-tools/initramfs.conf
MODULES=most

Afterwards the initramfs must be rebuilt by running the following command (adjust the kernel version according to the kernel you are running):

# dpkg-reconfigure linux-image-3.16.0-4-686-pae
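
Alternatively, the images for all installed kernels can be rebuilt in one go with update-initramfs (an equivalent route, not the one I used above):

# update-initramfs -u -k all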

Now the initramfs contains all required modules to successfully boot the host from a KVM virtio or a Xen blockfront disk device. This can be checked with:

# lsinitramfs /boot/initrd.img-3.16.0-4-686-pae | egrep -i "(xen|virtio).*blk.*\.ko"
lib/modules/3.16.0-4-686-pae/kernel/drivers/block/virtio_blk.ko
lib/modules/3.16.0-4-686-pae/kernel/drivers/block/xen-blkback/xen-blkback.ko
lib/modules/3.16.0-4-686-pae/kernel/drivers/block/xen-blkfront.ko

Enable serial console
To easily access the server console from the virtualization host, a serial console must be set up in the guest, which is defined in /etc/default/grub:

[...]
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1"
[...]
# Uncomment to disable graphical terminal (grub-pc only)
GRUB_TERMINAL="console serial"

After regenerating the Grub configuration, the boot loader and local console will then also be accessible via virsh console:

# grub-mkconfig -o /boot/grub/grub.cfg
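
Once the VM has been defined (see the next sections), the guest console can then be reached from the virtualization host, for example:

# virsh console myhost

On a systemd-based Jessie guest no additional getty configuration should be needed, since systemd starts serial-getty@ttyS0.service automatically when console=ttyS0 is present on the kernel command line.
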
Run the host on a KVM hypervisor

KVM is nowadays the de facto default virtualization solution for running Linux on Linux and is very well integrated into all major distributions and a wide range of management tools. Therefore it’s really simple to boot the disks in a KVM VM:

  • Make sure the disks are not already assembled as an md device on the KVM host. I ensured this by uninstalling the mdadm utility there.
  • Run the following virt-install command to define and start the VM:
    # virt-install --connect qemu:///system --name myhost --memory 1024 --vcpus 2 --cpu host --os-variant debian7 --arch i686 --import --disk /dev/disk/by-id/ata-WDC_WD10EFRX-68PJCN0_WD-WCC4J4527214 --disk /dev/disk/by-id/ata-WDC_WD10EFRX-68PJCN0_WD-WCC4J4366412 --network bridge=virbr0 --graphics=none

To avoid confusion with the disk enumeration of the KVM host kernel, the disk devices are given as unique by-id device paths instead of e.g. /dev/sdb and /dev/sdc. This clearly identifies the two disks, which will then be assembled into a RAID 1 by the guest kernel.
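
In case you’re not sure which by-id path corresponds to which physical disk, the symlinks below /dev/disk/by-id can simply be listed on the KVM host (just a quick sanity check, not part of the procedure above):

# ls -l /dev/disk/by-id/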

Run the host on a Xen hypervisor

Xen has been a topic in this blog a few times before, as I’ve been using it since its early days. After a long period of major code refactoring it has also made huge steps forward within the last few years, so that nowadays it’s nearly as easy to set up as KVM and again well supported in the major distributions. Because its architecture is a bit different and Linux is not the only supported platform, some things don’t work quite the same as under KVM. Let’s find out…

Prepare Xen Grub bootloader
Xen virtual machines can be booted in various ways, also depending on whether the guest disk is attached to an emulated disk controller or as a paravirtualized disk. In simple cases this is done with the help of PyGrub, which loads a Grub Legacy grub.conf or a Grub 2 grub.cfg from the guest’s /boot partition. As my grub.cfg and kernel are “hidden” inside a RAID 1, a fully featured Grub 2 is required, which is perfectly able to load its configuration from advanced disk setups such as RAID arrays, LVM volumes and more. With Xen such a setup is only possible when using a dedicated boot loader disk image built for the target architecture of the virtual machine. As I want to run a 32-bit virtual machine, an i386-xen Grub 2 flavor needs to be compiled as the basis for the boot loader image:

  • Clone the Grub 2 source code:
    $ git clone git://git.savannah.gnu.org/grub.git
  • Configure and build the source code. I ran the following commands on a Gentoo Xen server without issues; on other distributions you might need to install some development packages first:
    $ cd grub
    $ ./autogen.sh
    $ ./configure --target=i386 --with-platform=xen
    $ make -j3
  • Prepare a grub.cfg for the Grub image. The Grub 2 of the Debian guest was installed on a mirrored boot device /dev/md0, which was mounted at /boot. Therefore the Grub image must be told to load the actual configuration from there:
    # grub.cfg for Xen Grub image grub-md0-i386-xen.img
    normal (md/0)/grub/grub.cfg
  • Create the Grub image using the previously compiled Grub modules:
    # grub-mkimage -O i386-xen -c grub.cfg -o /var/lib/libvirt/images/grub-md0-i386-xen.img -d ~user/grub/grub-core ~user/grub/grub-core/*.mod

A more generic guide about generating Xen Grub images can be found at Using Grub 2 as a bootloader for Xen PV guests.

Define the virtual machine
virt-install can also be used to import disks (or disk images) as Xen virtual machines. This works perfectly fine for e.g. importing Atomic Host raw images, but unfortunately the setup here is a bit more complicated: it requires specifying the custom Grub 2 image as the guest kernel, for which I couldn’t find a corresponding virt-install argument (using v1.3.0). Therefore I first created a plain Xen xl configuration file and then converted it to a libvirt XML file. The initial configuration /etc/xen/myhost.cfg looks as follows:

name = "myhost"
kernel = "/var/lib/libvirt/images/grub-md0-i386-xen.img"
memory = 1024
vcpus = 2
vif = [ 'bridge=virbr0' ]
disk = [ '/dev/disk/by-id/ata-WDC_WD10EFRX-68PJCN0_WD-WCC4J4527214,raw,xvda,rw', '/dev/disk/by-id/ata-WDC_WD10EFRX-68PJCN0_WD-WCC4J4366412,raw,xvdb,rw' ]

Eventually this configuration can be converted to a libvirt domain XML and the domain defined with the following commands:

# virsh domxml-from-native --format xen-xl --config /etc/xen/myhost.cfg > /etc/libvirt/libxl/myhost.xml
# virsh --connect xen:/// define /etc/libvirt/libxl/myhost.xml
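
After that, the domain can be started and its console attached just like on the KVM side, for example:

# virsh --connect xen:/// start myhost
# virsh --connect xen:/// console myhost
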
Summary

In the end, I successfully converted the bare metal host to a KVM and/or Xen virtual machine. Once again Xen ended up being a bit more troublesome, but no nasty hacks were required and both configurations were quite straightforward. The boot loader setup for Xen could definitely be a bit simpler, but it’s still nice to see how well Grub 2 and Xen play together now. It’s also pleasing that no virtualization-specific configuration had to be made within the guest installation. The change of the disk device names from /dev/sd[ab] (bare metal) to /dev/vd[ab] (KVM) and /dev/xvd[ab] (Xen) was completely absorbed by the mdadm layer, but even without software RAID, the installers nowadays generate /etc/fstab entries in a way that makes such migrations easily possible.

Thanks for reading. If you have some comments, don’t hesitate to write a note below.

Oct 18 2009
 

Since I was using the nice FineGrainedPermissions feature of the trac 0.11 release, and Debian was only providing trac 0.10.3 in Etch, I had a custom trac installation running on my Etch server. When migrating to Lenny you would normally think it’s enough to just copy your project directory to the new installation. Unfortunately, this results in a nasty error message:

DatabaseError: file is encrypted or is not a database

Hmm, so let’s check the trac migration guide, which advises you to first export the SQLite database with sqlite3 into a plain SQL file. Not much luck here either; the result is an empty dump:

# sqlite3 trac.db .dump
BEGIN TRANSACTION;
COMMIT;

The reason is that the trac installation in Etch was using the python-sqlite 1.0.1 back-end, which uses the SQLite 2 file format, while Lenny ships python-pysqlite2 2.4.1, which only knows about SQLite 3.
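
A quick way to see which file format a given database uses is the file utility (the exact output string depends on the file version installed):

# file trac.db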

The conversion from SQLite 2 to 3 can be done by first exporting the database with the sqlite tool and then re-importing it with sqlite3:

# sqlite trac.db .dump | sqlite3 trac3.db
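
To verify that the conversion worked, the new database should now list the trac tables, for example:

# sqlite3 trac3.db ".tables"

Afterwards the converted file presumably just has to replace the old trac.db in the project’s db directory (keep a backup of the original, of course).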

More information about this can be found in the trac upgrade notes from 0.8.x to 0.9.

Finally your trac installation should work again as usual.

Jun 27 2007
 

Finally I’m back once more 😉 . While I was in military service my Zyxel modem suddenly refused to work with my fli4l router, which forced me to switch the modem into router mode, and this made it impossible to reach my web server from outside. That’s the reason for the downtime. It’s quite embarrassing that I only write in my blog after a service failure, but this time that should change!

I moved my web site from my home server to a machine in a hosting center. Together with some friends I’m sharing a physical machine split into different Xen domains. I’ve generally done a lot of work with Xen recently, so I hope to treat you to some tricks about it soon. In my Xen domain the new Debian 4.0 Etch is running; with the help of debootstrap it was installed in a few minutes. By the way… does anyone know the successor of the base-config program from the earlier versions?

Anyway, I hope that the site can stay here for a while and that you’ll come back when there is more news.

Jul 13 2006
 

The recently discovered kernel vulnerability CVE-2006-2451, affecting kernels >2.6.13 and <2.6.16.24 and <2.6.17.4, through which a local attacker can gain root privileges, was demonstrated on the prominent example of the Debian development server Gluck. Thanks to the quick reaction of the administrators only /bin/ping was compromised; apart from that no other modified data, packages or programs were discovered. This shows once again that as an administrator you are well advised to keep your kernel versions up to date and to monitor your machines closely.

Source: debian.org: Debian Server restored after Compromise

Update – 7 August 2006: For everyone who is still running one of the affected kernels on a production system (the default kernel (2.6.8) of Debian Sarge is, by the way, not affected), there is a workaround: Mitigating against recent GNU/Linux kernel bugs