Introduction
I've blogged a few times about how Dracut and QEMU can be combined to greatly improve Linux kernel dev/test turnaround.
- My first post covered the basics of building the kernel, running dracut, and booting the resultant image with qemu-kvm.
- A later post took a closer look at network configuration, and focused on bridging VMs with the hypervisor.
- Finally, my third post looked at how this technique could be combined with Ceph, to provide a similarly efficient workflow for Ceph development.
Usage - Standalone Linux VM
The following procedure was tested on openSUSE Leap 42.3 and SLES 12SP3, but should work fine on many other Linux distributions.
Step 1: Checkout and Build
Checkout the Linux kernel and Rapido source repositories:
~/> cd ~
~/> git clone https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
~/> git clone https://github.com/rapido-linux/rapido.git
Build the kernel (using a config provided with the Rapido source):
~/> cp rapido/kernel/vanilla_config linux/.config
~/> cd linux
~/linux/> make -j6
~/linux/> make modules
~/linux/> INSTALL_MOD_PATH=./mods make modules_install
Step 2: Configuration
Install Rapido dependencies: dracut and qemu.
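On openSUSE and SLES (the distributions this procedure was tested on) that amounts to something like the following; package names may differ on other distributions:
~/> sudo zypper install dracut qemu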
Create a master rapido.conf configuration file using the example template:
~/linux/> cd ~/rapido
~/rapido/> cp rapido.conf.example rapido.conf
~/rapido/> vi rapido.conf
- set KERNEL_SRC="/home/<user>/linux"
- set KERNEL_INSTALL_MOD_PATH="${KERNEL_SRC}/mods"
- the remaining options can be left as-is for now (an example of the resulting settings follows this list)
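For illustration, assuming a hypothetical user ddiss with the kernel tree cloned to /home/ddiss/linux, the edited settings would read:
KERNEL_SRC="/home/ddiss/linux"
KERNEL_INSTALL_MOD_PATH="${KERNEL_SRC}/mods"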
Step 3: Image Generation
Generate a minimal Linux VM image which includes binaries, libraries and kernel modules for filesystem testing:
~/rapido/> ./cut_fstests_local.sh
...
dracut: *** Creating initramfs image file 'initrds/myinitrd' done ***
~/rapido/> ls -lah initrds/myinitrd
-rw-r--r-- 1 ddiss users 30M Dec 13 18:17 initrds/myinitrd
Step 4: Boot!
~/rapido/> ./vm.sh
+ mount -t btrfs /dev/zram1 /mnt/scratch
[ 3.542927] BTRFS info (device zram1): disk space caching is enabled
...
btrfs filesystem mounted at /mnt/test and /mnt/scratch
rapido1:/#
In a whopping four seconds or thereabouts, the VM should have booted to a rapido1:/# bash prompt, leaving you with two zram-backed Btrfs filesystems mounted at /mnt/test and /mnt/scratch.
Everything, including the VM's root filesystem, is in memory, so any changes will not persist across reboot. Use the rapido.conf QEMU_EXTRA_ARGS parameter if you wish to add persistent storage to a VM.
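As a sketch, a qcow2 image on the hypervisor could be attached to the VM as a virtio disk. The image path below is hypothetical, and the image needs to be created first:
~/rapido/> qemu-img create -f qcow2 /home/ddiss/rapido-data.qcow2 1G
Then, in rapido.conf:
QEMU_EXTRA_ARGS="-drive file=/home/ddiss/rapido-data.qcow2,if=virtio,format=qcow2"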
Once you're done playing around, you can shutdown:
rapido1:/# shutdown
[ 267.304313] sysrq: SysRq : Power Off
rapido1:/# [ 268.168447] ACPI: Preparing to enter system sleep state S5
[ 268.169493] reboot: Power down
+ exit 0
Step 5: Network Configuration
The fstests_local VM above is networkless, so it doesn't require bridge network configuration. For VMs that do (e.g. the CephFS client below), edit rapido.conf (example settings follow the list below):
- set TAP_USER="<user>"
- set MAC_ADDR1 to a valid MAC address, e.g. "b8:ac:24:45:c5:01"
- set MAC_ADDR2 to a valid MAC address, e.g. "b8:ac:24:45:c5:02"
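Taken together, the network settings in rapido.conf might look as follows (the user name is illustrative):
TAP_USER="ddiss"
MAC_ADDR1="b8:ac:24:45:c5:01"
MAC_ADDR2="b8:ac:24:45:c5:02"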
Configure the isolated bridge and tap network devices. This must be done as root:
~/rapido/> sudo tools/br_setup.sh
~/rapido/> ip addr show br0
4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
...
    inet 192.168.155.1/24 scope global br0
Usage - Ceph vstart.sh cluster and CephFS client VM
This usage guide builds on the previous standalone Linux VM procedure, but this time adds Ceph to the mix. If you're not interested in Ceph (how could you not be!) then feel free to skip to the next section.
Step I - Checkout and Build
We already have a clone of the Rapido and Linux kernel repositories. All that's needed for CephFS testing is a Ceph build:
~/> git clone https://github.com/ceph/ceph
~/> cd ceph
<install Ceph build dependencies>
~/ceph/> ./do_cmake.sh -DWITH_MANPAGE=0 -DWITH_OPENLDAP=0 -DWITH_FUSE=0 -DWITH_NSS=0 -DWITH_LTTNG=0
~/ceph/> cd build
~/ceph/build/> make -j4
Step II - Start a vstart.sh Ceph "cluster"
Once Ceph has finished compiling, vstart.sh can be run with the following parameters to configure and locally start three OSDs, one monitor process, and one MDS.
~/ceph/build/> OSD=3 MON=1 RGW=0 MDS=1 ../src/vstart.sh -i 192.168.155.1 -n
...
~/ceph/build/> bin/ceph -c ceph.conf status
...
     health HEALTH_OK
     monmap e2: 1 mons at {a=192.168.155.1:40160/0}
            election epoch 4, quorum 0 a
      fsmap e5: 1/1/1 up {0=a=up:active}
        mgr no daemons active
     osdmap e10: 3 osds: 3 up, 3 in
Step III - Rapido configuration
Edit rapido.conf, the master Rapido configuration file:
~/ceph/build/> cd ~/rapido
~/rapido/> vi rapido.conf
- set CEPH_SRC="/home/<user>/ceph/src"
- KERNEL_SRC and network parameters were configured earlier; an example CEPH_SRC setting is shown below
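Assuming the Ceph tree was cloned alongside the kernel tree under the same (hypothetical) home directory, the new setting amounts to a single line:
CEPH_SRC="/home/ddiss/ceph/src"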
Step IV - Image Generation
The cut_cephfs.sh script generates a VM image with the Ceph configuration and keyring from the vstart.sh cluster, as well as the CephFS kernel module.
~/rapido/> ./cut_cephfs.sh
...
dracut: *** Creating initramfs image file 'initrds/myinitrd' done ***
Step V - Boot!
Booting the newly generated image should bring you to a shell prompt, with the vstart.sh provisioned CephFS filesystem mounted under /mnt/cephfs:
~/rapido/> ./vm.sh
...
+ mount -t ceph 192.168.155.1:40160:/ /mnt/cephfs -o name=admin,secret=...
[ 3.492742] libceph: mon0 192.168.155.1:40160 session established
...
rapido1:/# df -h /mnt/cephfs
Filesystem             Size  Used Avail Use% Mounted on
192.168.155.1:40160:/  1.3T  611G  699G  47% /mnt/cephfs
CephFS is a clustered filesystem, so testing from multiple clients is also of interest. From another window, boot a second VM:
~/rapido/> ./vm.sh
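Both VMs mount the same vstart.sh provisioned filesystem, so a change made by one client should be immediately visible to the other. A quick sanity check (the second VM's hostname is assumed to be rapido2, and the file name is arbitrary):
rapido1:/# echo "hello from rapido1" > /mnt/cephfs/hello.txt
rapido2:/# cat /mnt/cephfs/hello.txt
hello from rapido1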
Further Use Cases
Rapido ships with a bunch of scripts for testing different kernel components:
- cut_cephfs.sh (shown above)
- Image: includes Ceph config, credentials and CephFS kernel module
- Boot: mounts CephFS filesystem
- cut_cifs.sh
- Image: includes CIFS (SMB client) kernel module
- Boot: mounts share using details and credentials specified in rapido.conf
- cut_dropbear.sh
- Image: includes dropbear SSH server
- Boot: starts an SSH server with SSH_AUTHORIZED_KEY (see the example following this list)
- cut_fstests_cephfs.sh
- Image: includes xfstests and CephFS kernel client
- Boot: mounts CephFS filesystem and runs FSTESTS_AUTORUN_CMD
- cut_fstests_local.sh (shown above)
- Image: includes xfstests and local Btrfs and XFS dependencies
- Boot: provisions local xfstest zram devices. Runs FSTESTS_AUTORUN_CMD
- cut_lio_local.sh
- Image: includes LIO, loopback dev and dm-delay kernel modules
- Boot: provisions an iSCSI target, with three LUs exposed
- cut_lio_rbd.sh
- Image: includes LIO and Ceph RBD kernel modules
- Boot: provisions an iSCSI target backed by CEPH_RBD_IMAGE, using target_core_rbd
- cut_qemu_rbd.sh
- Image: CEPH_RBD_IMAGE is attached to the VM using qemu-block-rbd
- Boot: runs shell only
- cut_rbd.sh
- Image: includes Ceph config, credentials and Ceph RBD kernel module
- Boot: maps CEPH_RBD_IMAGE using the RBD kernel client
- cut_samba_cephfs.sh
- Image: includes Ceph vstart config, credentials and libcephfs from CEPH_SRC, and additionally pulls in Samba from a (pre-compiled) SAMBA_SRC
- Boot: configures smb.conf with a CephFS backed share and starts Samba
- cut_samba_local.sh
- Image: includes local kernel filesystem utils, and pulls in Samba from SAMBA_SRC
- Boot: configures smb.conf with a zram backed share and starts Samba
- cut_tcmu_rbd_loop.sh
- Image: includes Ceph config, librados, librbd, and pulls in tcmu-runner from TCMU_RUNNER_SRC
- Boot: starts tcmu-runner and configures a tcmu+rbd backstore exposing CEPH_RBD_IMAGE via the LIO loopback fabric
- cut_usb_rbd.sh (see https://github.com/ddiss/rbd-usb)
- Image: usb_f_mass_storage, zram, dm-crypt, and RBD_USB_SRC
- Boot: starts the conf-fs.sh script from RBD_USB_SRC
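As an example of one of the other use cases, the dropbear VM offers a disposable SSH target. The public key path below is hypothetical, the key is assumed to be authorized for root, and the VM's address should be replaced with the one it reports on the br0 network:
~/rapido/> vi rapido.conf
- set SSH_AUTHORIZED_KEY="/home/<user>/.ssh/id_rsa.pub"
~/rapido/> ./cut_dropbear.sh
~/rapido/> ./vm.sh
...
rapido1:/#
From another terminal on the hypervisor:
~/> ssh root@<VM IP address>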
Conclusion
- Dracut and QEMU can be combined for super-fast Linux kernel testing and development.
- Rapido is mostly just a glorified wrapper around these utilities, but does provide some useful tools for automated testing of specific Linux kernel functionality.
If you run into any problems, or wish to provide any kind of feedback (always appreciated), please feel free to leave a message below, or raise a ticket in the Rapido issue tracker.
Update 20170106:
- Add cut_tcmu_rbd_loop.sh details and fix the example CEPH_SRC path.
- Use KERNEL_INSTALL_MOD_PATH instead of an ugly symlink
- Update Github links to refer to new project URL
- Remove old brctl and tunctl dependencies
- Split network setup into a separate section, as fstests_local VMs are now networkless
- Add cut_samba_cephfs.sh and cut_samba_local.sh details