I recently faced the challenge of migrating a physical host running Debian, installed on an ancient 32-bit machine with a software RAID 1, to a virtual machine (VM). The migration should be as simple as possible, so I decided to attach the original disks to the virtualization host and define a new VM which would boot from these disks. This doesn’t sound too complicated, but it still included some steps I was not so familiar with, so I’m writing them down here for later reference. As an experiment I tried the migration on both a KVM and a Xen host.
Prepare original host for virtualized environment
Enable virtualized disk drivers
The host I wanted to migrate was running a Debian Jessie i686 installation on bare metal hardware. For booting it uses an initramfs which loads the required disk controller drivers. By default the initramfs only includes the drivers for the current hardware, so the paravirtualization drivers for KVM (virtio-blk) and Xen (xen-blkfront) are missing. This can be changed by adjusting the /etc/initramfs-tools/conf.d/driver-policy file so that not only the currently required (dep) modules are included, but most of them:
# Driver inclusion policy selected during installation
# Note: this setting overrides the value set in the file
# /etc/initramfs-tools/initramfs.conf
MODULES=most
Afterwards the initramfs must be rebuilt by running the following command (adjust the kernel version according to the kernel you are running):
# dpkg-reconfigure linux-image-3.16.0-4-686-pae
Now the initramfs contains all required modules to successfully boot the host from a KVM virtio or a Xen blockfront disk device. This can be checked with:
# lsinitramfs /boot/initrd.img-3.16.0-4-686-pae | egrep -i "(xen|virtio).*blk.*\.ko"
lib/modules/3.16.0-4-686-pae/kernel/drivers/block/virtio_blk.ko
lib/modules/3.16.0-4-686-pae/kernel/drivers/block/xen-blkback/xen-blkback.ko
lib/modules/3.16.0-4-686-pae/kernel/drivers/block/xen-blkfront.ko
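If multiple kernels are installed, the initramfs images can also be rebuilt in one go with update-initramfs instead of reconfiguring each kernel package individually; a minimal sketch, assuming the stock Debian initramfs-tools:

# update-initramfs -u -k all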
Enable serial console
To easily access the server console from the virtualization host, it’s required to set up a serial console in the guest, which must be defined in /etc/default/grub:
[...]
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1"
[...]
# Uncomment to disable graphical terminal (grub-pc only)
GRUB_TERMINAL="console serial"
After regenerating the Grub configuration with the following command, the boot loader and the local console will also be accessible via the serial console:
# grub-mkconfig -o /boot/grub/grub.cfg
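Once the VM is defined on the virtualization host (see the sections below), its serial console can be attached to with virsh. On a systemd-based guest such as Jessie, a login getty is normally spawned automatically for the console= device given on the kernel command line; if no login prompt appears, it can be enabled by hand. A short sketch, assuming the VM is named myhost as in the examples below:

# virsh console myhost

And, only if needed, inside the guest:

# systemctl enable serial-getty@ttyS0.service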
Run the host on a KVM hypervisor
KVM is nowadays the de facto standard virtualization solution for running Linux on Linux and is very well integrated into all major distributions and a wide range of management tools. Therefore it’s really simple to boot the disks into a KVM VM:
- Make sure the disks are not already assembled as an md device on the KVM host. I did this by uninstalling the mdadm package from the KVM host (a quick check is sketched after the virt-install command below).
- Run the following virt-install command to define and start the VM:
# virt-install --connect qemu:///system --name myhost \
    --memory 1024 --vcpus 2 --cpu host \
    --os-variant debian7 --arch i686 --import \
    --disk /dev/disk/by-id/ata-WDC_WD10EFRX-68PJCN0_WD-WCC4J4527214 \
    --disk /dev/disk/by-id/ata-WDC_WD10EFRX-68PJCN0_WD-WCC4J4366412 \
    --network bridge=virbr0 --graphics=none
To avoid confusion with the disk enumeration of the KVM host kernel, the disk devices are given as unique device paths instead of e.g. /dev/sdb and /dev/sdc. This clearly identifies the two disk devices which will then be assembled into a RAID 1 by the guest kernel.
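Both the stable by-id paths and the md state of the KVM host can be checked beforehand; a minimal sketch (the grep pattern assumes the disks currently show up as sdb and sdc on the host):

# ls -l /dev/disk/by-id/ | grep 'sd[bc]$'
# cat /proc/mdstat
# mdadm --stop /dev/md127

The last command stops a single auto-assembled array (md127 is just a typical auto-assembly name, adjust as needed) and is only relevant as long as mdadm is still installed on the host.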
Run the host on a Xen hypervisor
Xen has been a topic on this blog a few times before, as I have been using it since its early days. After a long period of major code refactorings it has also made huge steps forward within the last few years, so that nowadays it’s nearly as easy to set up as KVM and again well supported in the large distributions. Because its architecture is a bit different and Linux is not the only supported platform, some things don’t work quite the same as under KVM. Let’s find out…
Prepare Xen Grub bootloader
Xen virtual machines can be booted in various different ways, also depending on whether the guest disk runs on an emulated disk controller or as a paravirtualized disk. In simple cases this is done with the help of PyGrub, loading a Grub Legacy grub.conf or a Grub 2 grub.cfg from a guest’s /boot partition. As my grub.cfg and kernel are “hidden” in a RAID 1, a fully featured Grub 2 is required, which is perfectly able to load its configuration from advanced disk setups such as RAID arrays, LVM volumes and more. With Xen such a setup is only possible when using a dedicated boot loader disk image, which must match the target architecture of the virtual machine. As I want to run a 32-bit virtual machine, an i386-xen Grub 2 flavor needs to be compiled as the basis for the boot loader image:
- Clone the Grub 2 source code:
$ git clone git://git.savannah.gnu.org/grub.git
- Configure and build the source code. I was running the following commands on a Gentoo Xen server without issues. On other distributions you might need to install some development packages first (see the note after this list):
$ cd grub
$ ./autogen.sh
$ ./configure --target=i386 --with-platform=xen
$ make -j3
- Prepare a grub.cfg for the Grub image. The Grub 2 from the Debian guest was installed on a mirrored boot partition /dev/md0, mounted at /boot. Therefore the Grub image must be hinted to load the actual configuration from there ((md/0) is Grub’s device name for the first MD array):
# grub.cfg for Xen Grub image grub-md0-i386-xen.img
normal (md/0)/grub/grub.cfg
- Create the Grub image using the previously compiled Grub modules:
# grub-mkimage -O i386-xen -c grub.cfg \
    -o /var/lib/libvirt/images/grub-md0-i386-xen.img \
    -d ~user/grub/grub-core ~user/grub/grub-core/*.mod
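Regarding the development packages mentioned above: on a Debian-based build host they can usually be pulled in via the build dependencies of the distribution’s own Grub package; a sketch, assuming deb-src entries are enabled in the apt sources:

# apt-get build-dep grub2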
A more generic guide about generating Xen Grub images can be found at Using Grub 2 as a bootloader for Xen PV guests.
Define the virtual machine
virt-install can also be used to import disks (or disk images) as Xen virtual machines. This works perfectly fine for e.g. importing Atomic Host raw images, but unfortunately the setup is now a bit more complicated: it requires specifying the custom Grub 2 image as the guest kernel, for which I couldn’t find a corresponding virt-install argument (using v1.3.0). Therefore I first created a plain Xen xl configuration file and then converted it to a libvirt XML file. The initial configuration /etc/xen/myhost.cfg looks as follows:
name = "myhost" kernel = "/var/lib/libvirt/images/grub-md0-i386-xen.img" memory = 1024 vcpus = 2 vif = [ 'bridge=virbr0' ] disk = [ '/dev/disk/by-id/ata-WDC_WD10EFRX-68PJCN0_WD-WCC4J4527214,raw,xvda,rw', '/dev/disk/by-id/ata-WDC_WD10EFRX-68PJCN0_WD-WCC4J4366412,raw,xvdb,rw' ]
Eventually this configuration can be converted to a libvirt domain XML and defined with the following commands:
# virsh domxml-from-native --format xen-xl --config /etc/xen/myhost.cfg > /etc/libvirt/libxl/myhost.xml
# virsh --connect xen:/// define /etc/libvirt/libxl/myhost.xml
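From here on the VM can be managed like any other libvirt guest; for example, starting it with the serial console attached:

# virsh --connect xen:/// start myhost --console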
In the end, I successfully converted the bare metal host to a KVM and/or Xen virtual machine. Once again Xen ended up being a bit more troublesome, but no nasty hacks were required and both configurations were quite straightforward. The boot loader setup for Xen could definitely be a bit simpler, but it’s still nice to see how well Grub 2 and Xen play together now. It’s also pleasing that no virtualization-specific configuration had to be made within the guest installation. The fact that the disk device names changed from /dev/sd[ab] (bare metal) to /dev/vd[ab] (KVM) and /dev/xvd[ab] (Xen) was completely absorbed by the mdadm layer, but even without software RAID, the /etc/fstab entries are nowadays generated by the installers in a way that makes such migrations easily possible.
Thanks for reading. If you have some comments, don’t hesitate to write a note below.