Xen is an open-source para-virtualizing virtual machine monitor (VMM), or 'hypervisor', for the x86 processor architecture. Xen can securely execute multiple virtual machines on a single physical system with close-to-native performance, and facilitates enterprise-grade functionality.

Example Usage scenarios for Xen

Server Consolidation 
Move multiple servers onto a single physical host with performance and fault isolation provided at the virtual machine boundaries.
Hardware Independence 
Allow legacy applications and operating systems to exploit new hardware.
Multiple OS configurations 
Run multiple operating systems simultaneously, for development or testing purposes.
Kernel Development 
Test and debug kernel modifications in a sand-boxed virtual machine -- no need for a separate test machine.
Cluster Computing 
Management at VM granularity provides more flexibility than separately managing each physical host, but better control and isolation than single-system image solutions, particularly by using live migration for load balancing.
Hardware support for custom OSes 
Allow development of new OSes while benefiting from the wide-ranging hardware support of existing OSes such as Linux.

Xen Overview

These are the key items that you will need when setting up your system to use Xen.

Each VM you run is called a Domain in Xen terms. Domain 0 (aka Dom0) is the master domain and replaces your normal Linux kernel. Through it you use the management tools to control other VMs. Other domains are unprivileged and are termed Domain U or DomU.

A key point to remember is that Xen requires the DomU systems to use special drivers to access hardware. Dom0 manages the hardware and its drivers act as a "backend" and manage access to the actual hardware.

Remember that the Xen 3.0 User Manual provides a large amount of authoritative information. It will help you understand many things about Xen that are not described in this HOWTO.

From a standard Gentoo system you will need to do the following to start with Xen:

The Dom0 kernel will effectively replace your normal Linux kernel and will reuse the environment that you have already set up.

Once you have your system running Xen and Dom0 you can start configuring various DomUs.

Note: Client kernels (DomU) must exist in the Dom0 filesystem in order for Xen to start the VM. Xen is presently unable to read and boot a kernel from within a client disk image. This has the advantage that a DomU kernel can be shared between VMs, but it makes the setup a bit harder.

In the simple case your DomU OS will be the same as your Dom0 OS but running off a different file system. In this case you can make your kernel configuration identical except for the Xen specific drivers.

Preparing the System


Ensure the system is running a recent Gentoo profile (currently 2008.0). Using a recent profile ensures you have a version of glibc with NPTL. Any 2006.1 or later profile (including the desktop sub-profile) will do.

You can check what profile the system is using by checking the result of running: eselect profile list

Code: Example profile list
Available profile symlink targets:
  [1]   default-linux/x86/2007.0 *
  [2]   default-linux/x86/no-nptl
  [3]   default-linux/x86/no-nptl/2.4
  [4]   default-linux/x86/2007.0/desktop
  [5]   hardened/x86/2.6
  [6]   selinux/x86/2007.0

The currently selected profile is displayed with an asterisk (*) next to it. In the above example the selected profile is default-linux/x86/2007.0.

If the system is not showing any recent profiles, then you need to update your local portage repository by running: emerge --sync

For details on how to change the system profile, see the Gentoo Upgrading Guide.

Note: If you plan to add hvm support on amd64, you will need multilib support. As long as you DON'T have a "no-multilib" profile you're fine, as all other profiles support multilib. This gets confusing because "emerge -pv gcc glibc" will always show (-multilib) as a USE option; that just means the option is locked, not what it is set to. The only way I found to check is to run "gcc -v" and look for the string "--enable-multilib" in the output. Another catch: once you change your profile to "no-multilib" there is apparently no way back.
Note: If the xen-sources kernel is significantly older than gentoo-sources kernel and coreutils is up to date, you may need to downgrade coreutils before installing xen-sources and xen-tools. Currently xen-sources is at 2.6.21, gentoo-sources is at 2.6.25, xen-tools is at 3.2.1, and coreutils must be downgraded from 6.12 to 6.11. (Networking will be strangely broken if you do not).
See this post for more info: [1]
Code: Example profile list (amd64)
Available profile symlink targets:
  [1]   default-linux/amd64/2006.1
  [2]   default-linux/amd64/2006.1/desktop
  [3]   default-linux/amd64/2006.0/no-symlinks
  [4]   default-linux/amd64/2006.1/no-multilib
  [5]   default-linux/amd64/2007.0 *
  [6]   default-linux/amd64/2007.0/desktop
  [7]   default-linux/amd64/2007.0/no-multilib
  [8]   default-linux/amd64/2007.0/server
  [9]   hardened/amd64
  [10]  hardened/amd64/multilib
  [11]  selinux/2007.0/amd64
  [12]  selinux/2007.0/amd64/hardened


Some software, in particular the glibc TLS library, is implemented in a way that conflicts with how Xen uses segment registers to work around a limitation of 32-bit x86 hardware. This causes a performance penalty of roughly 50% when running multi-threaded applications under Xen. To fix this, you must compile your system with the '-mno-tls-direct-seg-refs' flag.

Edit your /etc/make.conf and add '-mno-tls-direct-seg-refs' to your CFLAGS. This is similar to the Xen instructions to "mv /usr/lib/tls /usr/lib/tls.disabled", but instead removes the trapped (slow) opcodes for every binary, not just glibc. If using the -Os flag (with any <gcc-4), change it to -O2, as the compiler is known to produce broken code otherwise.
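For example, the relevant make.conf lines might look like this (the -march value below is only an illustration; keep whatever flags you already use):

```shell
# /etc/make.conf -- CFLAGS with the Xen TLS workaround appended
# (-O2 -march=i686 -pipe are example flags; -mno-tls-direct-seg-refs is the addition)
CFLAGS="-O2 -march=i686 -pipe -mno-tls-direct-seg-refs"
CXXFLAGS="${CFLAGS}"
```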

Note: The '-mno-tls-direct-seg-refs' flag does not make sense on any 64-bit system. On such systems you can skip the recompilation of the whole world and just recompile glibc.

You will also need to fix the CFLAGS for each domain you install. In practice, however, you will do this only once and save the result as your 'skeleton base' for all your DomUs. Following this article's method of using binary packages built by the host will also save you time.

nptlonly USE flag

Note: From glibc-2.6, nptl is always enabled and the nptl and nptlonly USE flags are no longer available. So you can safely skip this section if you have glibc-2.6 or higher.

The system must be using the nptlonly USE flag. To check whether this USE flag is currently enabled, run: emerge -pv glibc

If it shows the nptlonly USE flag as being off, then you need to add it to your global USE flags in /etc/make.conf
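The make.conf change would look something like this (a sketch; nptl is usually enabled alongside nptlonly, and any other flags you already have stay in place):

```shell
# /etc/make.conf -- enable nptl/nptlonly globally
USE="${USE} nptl nptlonly"
```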

Activating buildpkg

Your system is about to be rebuilt entirely. To save time later (when building the domU installs, assuming you're going to install Gentoo on them), activate the buildpkg portage feature by adding it to FEATURES in /etc/make.conf:
echo 'FEATURES="${FEATURES} buildpkg"' >> /etc/make.conf

This feature tells portage to create a binary package from every package it compiles and store it in /usr/portage/packages. For more information on this feature see man make.conf or man quickpkg.

Applying Changes

Note that this step may take quite some time as it will recompile every package on your system.

Update the system by running: emerge -evat world

Warning! The machine may fail to boot after this rebuild. If that happens, re-install GRUB's boot sectors from the grub shell; that was enough to fix it here.

If you need an explanation of the flags used, run emerge --help or man emerge

Windows and other Unmodified Guests in domU (a.k.a. HVM Guests)

If you have a processor with Intel Virtualization Technology (VT, previously known as Vanderpool) or AMD Secure Virtual Machine (SVM, previously known as Pacifica) technology, you can run unmodified guest operating systems like Windows XP, unmodified Linux distributions, *BSD, Solaris x86, etc. Processors with hardware virtualization capability include the Pentium D 9x0 series, Intel Core, Intel Core 2 and many AMD AM2 CPUs. (Check for the vmx flag (Intel) or the svm flag (AMD) in /proc/cpuinfo.)
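A quick shell check for either flag (this only tells you that the CPU advertises the feature; the BIOS may still have it disabled):

```shell
# Intel VT advertises "vmx", AMD-V advertises "svm" in /proc/cpuinfo
if grep -Eq 'vmx|svm' /proc/cpuinfo; then
    echo "HVM capable"
else
    echo "no HVM support"
fi
```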

Before installing xen and xen-tools you will need to add hvm to your USE Flags. This is at least required with the current xen-3.1.2 ebuilds.

More information can be found at Xen: MS Windows systems as guest.

Note: If you intend to run Windows (or any other guest that needs to think it has access to "real VGA" hardware) in DomU, make sure you add sdl to your USE flags and emerge media-libs/libsdl before you emerge xen and xen-tools (recent xen ebuilds have an 'sdl' USE flag).

Note: For HVM guests to be able to start, it is important that you create the appropriate tap device(s). Thus, the sys-apps/usermode-utilities package is required (for tunctl), and support for TUN/TAP devices must be activated in the kernel configuration, under Device Drivers -> Network device support.

Note: For Windows HVM guests to track the mouse cursor correctly, it is important that you set the line usbdevice=tablet in your guest config file. For Linux HVM guests, see the Xen 3.0 docs, Appendix A.4.3.

Building the hypervisor and applications

Xen is still ~arch masked. Unmask it:

File: /etc/portage/package.keywords

On amd64 you will also need to add sys-devel/dev86 if you plan on using hvm.
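The keyword entries themselves did not survive on this page; a plausible reconstruction for x86 (substitute ~amd64 as appropriate, and add sys-devel/dev86 on amd64 as noted above) would be:

```
app-emulation/xen ~x86
app-emulation/xen-tools ~x86
sys-kernel/xen-sources ~x86
```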

Install the hypervisor and applications by running: emerge -av app-emulation/xen app-emulation/xen-tools

Add the xen daemon to the default runlevel with: rc-update add xend default

The xen ebuild installs the hypervisor (/boot/xen.gz), while the xen-tools package installs both the xend daemon for controlling the virtual machines, and various command line tools.

To configure the network, make your changes in /etc/conf.d/net but DO NOT add net.eth0 to runlevel default. /etc/init.d/xend will start and configure your network at boot time. (While testing the initial kernel build on a machine on a remote net connection it may be useful to leave net.eth0 enabled and NOT autostart xend)
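The network setup xend performs is chosen in its own configuration file; with a stock Xen 3.x install the relevant defaults in /etc/xen/xend-config.sxp are the bridge scripts:

```
(network-script network-bridge)
(vif-script vif-bridge)
```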

Newer Gentoo releases automatically hot-plug net.eth0 even if it is not in the default runlevel. You can disable this behaviour by changing the RC_PLUG_SERVICES variable in /etc/conf.d/rc:

File: /etc/conf.d/rc
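The entry body is missing above; based on the variable named in the text, it would look something like this (a sketch, assuming baselayout-1 syntax where a leading "!" excludes a service from hot-plugging):

```shell
# /etc/conf.d/rc -- don't hot-plug net.eth0; xend brings it up itself
RC_PLUG_SERVICES="!net.eth0"
```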

Building the kernel

There are two ways to build the kernel. You can do it manually, or you can have sys-kernel/genkernel do it for you. Genkernel will also build an initrd for you, which is where you activate LVM, EVMS and DMRAID volumes.

Install the Xen kernel sources with emerge sys-kernel/xen-sources. In /usr/src/linux-2.6.x.y-xen you will now find the sources required to build the kernel for a Xen domain.

It is recommended to build two separate kernels, one for domain 0, and one for domain U. You can use modules, but all drivers required to boot must be builtin.

Manually building the kernel

The Xen kernel can be difficult to configure - there are many options, some of which will cause your dom0 or domU kernels to fail on booting (eg. with errors opening the root device).

Separating dom0 and domU

The xen-sources ebuild installs only one copy of the kernel sources, but you have two separate configurations to maintain: one for dom0 and one for domU. You will therefore want two different ".config" files and two different trees of compiled binaries.

This can be achieved with the following aliases defined in:

File: ~/.bash_profile
alias make0="mkdir -p _dom0 && make O=_dom0"
alias makeU="mkdir -p _domU && make O=_domU"

From now on, you use "make0" or "makeU" instead of "make". For example "makeU menuconfig" will create the directory _domU and will store the ".config"-file in that subdirectory. "makeU all" will compile your domU kernel, and will store all binaries in that subdirectory. The same applies for make0 and the directory _dom0.

That way, you can manage both configurations with only one copy of the sources.

Now, for easy upgrading from one kernel to another, we create the script ~/bin/copy-config with the following content:

File: ~/bin/copy-config
#!/bin/bash
# Copy the dom0 and domU kernel configs from an older source tree,
# given as the first argument, into the current tree.
DIR="$1"
DIR0="_dom0"
DIRU="_domU"

mkdir -p "${DIR0}" "${DIRU}" && \
cp "$DIR/${DIR0}/.config" "${DIR0}/" && \
cp "$DIR/${DIRU}/.config" "${DIRU}/"

With that script, we can easily upgrade from let's say xen-sources-2.6.20-xen-r2 to xen-sources-2.6.20-xen-r3 with the following steps:

/usr/src # cd linux-2.6.20-xen-r3/
/usr/src/linux-2.6.20-xen-r3 # ~/bin/copy-config ../linux-2.6.20-xen-r2/
/usr/src/linux-2.6.20-xen-r3 # make0 oldconfig && makeU oldconfig

IMPORTANT: Since we modified the output directory of the kernel, we also need to inform portage about it so that we can emerge kernel modules. Add the following two lines to /etc/make.conf:

File: /etc/make.conf
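The two lines themselves were lost from this page. One plausible reconstruction, assuming portage's kernel-handling eclasses honour KERNEL_DIR and KBUILD_OUTPUT (verify against your eclass version before relying on it), is:

```shell
# /etc/make.conf -- point module ebuilds at the out-of-tree build directory
KERNEL_DIR="/usr/src/linux"
KBUILD_OUTPUT="/usr/src/linux/_dom0"
```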

Domain 0 Kernel Configuration

The domain 0 kernel should contain drivers for Xen backend devices, and all of your usual hardware. That is, the dom0 configuration should enable all the options for backend drivers and disable all options for building in the frontend drivers. The frontend driver configuration options will be used when building the domU kernel so take note of them for later. In effect the backend driver allows the dom0 to talk directly to the hardware. Conversely, the frontend driver is a stub driver allowing the domU to efficiently call through to the dom0 to ask its backend driver to do the actual work.

Ethernet bridging support is required in order to bridge domain U interfaces to a domain 0 /dev/ethX device, as is the Network-device loopback driver. This is the default setup created by /etc/xen/scripts/vif-bridge when a domain is created.

An alternative is to use IP routing in domain 0 if you want to keep domain U isolated from the external ethernet.

To even get at the Xen configuration options, you must make an appropriate selection under Processor type and features:

Note: The "Enable Xen compatible kernel" flag is specific to x86_64 and ia64 architectures. In i386 you need to set the "Subarchitecture Type" to "Xen-compatible" instead.
Note: Do not select the "Subarchitecture Type" of "ScaleMP vSMP" as this setting is not compatible with Xen. [2]
Linux Kernel Configuration: Enabling Xen (64 bit)
Processor type and features  --->
      Subarchitecture Type (PC-compatible)
  [*] Enable Xen compatible kernel
Linux Kernel Configuration: Enabling Xen (32 bit)
Processor type and features  --->
  Subarchitecture Type (Xen-compatible)  --->

The configuration dialogue for the Xen kernel options has changed quite a bit since this tutorial was written. Below is the new layout, with a sample (not thoroughly tested) configuration based on this section and the notes added to it.

Linux Kernel Configuration: Dialogue For Xen (as of 2.6.20-xen-r4)
XEN --->
  [*] Privileged Guest (domain 0)
  <*> Backend driver support
  <*>   Block-device backend driver
  <*>   Block-device tap backend driver
  <*>   Network-device backend driver
  [ ]     Pipelined transmitter (DANGEROUS)
  <*>     Network-device loopback driver
  <*>   PCI-device backend driver
          PCI Backend Mode (Virtual PCI)  --->
  [ ]     PCI Backend Debugging
  < >   TPM-device backend driver
  < > Block-device frontend driver
  < > Network-device frontend driver
  < > Framebuffer-device frontend driver
  [*] Scrub memory before freeing it to Xen
  [*] Disable serial port drivers
  <*> Export Xen attributes in sysfs
      Xen version compatibility (no compatibility code)  --->
Linux Kernel Configuration: Dialogue For Xen (old)
  [*] Privileged Guest (domain 0)
  < > PCI device backend driver
  <*> Block-device backend driver
  < >   Block Tap support for backend driver (DANGEROUS)
  <*> Network-device backend driver
  [ ]   Pipelined transmitter (DANGEROUS)
  <*>   Network-device loopback driver
  < > TPM-device backend driver
  < > Block-device frontend driver
  < > Network-device frontend driver
  < > Block device tap driver
  < > TPM-device frontend driver
  [*] Scrub memory before freeing it to Xen
  [ ] Disable serial port drivers
  <*> Export Xen attributes in sysfs

Other options are the same as they were the day this tutorial was written.

Linux Kernel Configuration: Miscellaneous Dom0 Options
Networking --->
  Networking options --->
    TCP/IP networking
      <*> IP: tunneling
    <*> 802.1d Ethernet Bridging

Device Drivers  --->
  Block devices  --->
    <*> Loopback device support

Note: If you want to let a DomU have direct access to specific hardware, e.g. a sound card, you should activate XEN ---> <*> PCI-device backend driver.
Note: On Dell PowerEdge servers you may need to disable USB support if the machine freezes constantly.
Note: If you don't get any output when you run xm console, activating "Disable serial port drivers" may help.
Note: In Xen 3.0.4 support for VBDs (virtual block devices) has changed. To get this new support make sure to include "Block Tap support for backend driver" in your kernel configuration.

Now compile and install the Domain 0 kernel:

make0 && make0 modules_install
cp _dom0/vmlinuz /boot/vmlinuz-2.6.x.y-xen0
Note: You won't be able to make bzImage. The Xen hypervisor takes care of booting from the Domain 0 vmlinuz image.

Domain U Kernel Configuration

The domain U kernel should contain only Xen frontend drivers since it has no real hardware. It is recommended that only the Xen specific items are different between the Dom0 and DomU kernel configuration files.

Linux Kernel Configuration: DomU Configuration (as of 2.6.20-xen-r4)
Processor type and features  --->
  Subarchitecture Type (Xen-compatible)  --->

Bus options (PCI etc.)  --->
  [*] Xen PCI Frontend

XEN --->
  [ ] Privileged Guest (domain 0)
  < > Backend driver support
  <*> Block-device frontend driver
  <*> Network-device frontend driver
  <*> Framebuffer-device frontend driver
  <*>   Keyboard-device frontend driver
  [*] Scrub memory before freeing it to Xen
  [*] Disable serial port drivers
  <*> Export Xen attributes in sysfs
      Xen version compatibility (no compatibility code)  --->
Linux Kernel Configuration: DomU Configuration (old)
Processor type and features  --->
  [*] Enable Xen compatible kernel

XEN --->
  [ ] Privileged Guest (domain 0)
  [ ]  Block-device backend driver
  [ ]  Network-device backend driver
  [*] Block-device frontend driver
  [*] Network-device frontend driver
  [ ]   Pipelined transmitter (DANGEROUS)
  [*] Disable serial port drivers
  [*] Scrub memory before freeing it to Xen
      Processor Type (X86)  --->
Note: You must enable framebuffer device support for the Framebuffer-device frontend driver option to appear
Note: You may also need to disable SCSI in the domU kernel, as the block frontend needs to register that device ID space
Note: Disabling the serial port drivers allows Dom0 to attach to a running unprivileged domain's serial port

Now compile and install the Domain U kernel:

makeU
cp _domU/vmlinuz /boot/vmlinuz-2.6.x.y-xenU

At the moment Xen can't boot from kernel images stored inside virtual machines, so you need to store them inside the domain 0 virtual machine. In this example they are stored in /boot/ but since they aren't necessary to boot domain 0 you can put them anywhere in the Dom0 filesystem.

Only one domain 0 kernel

Most of this is taken from the Xen Wiki, except the notes about depmod.

Many users will be better off using the "-xen" kernel instead of the "-xen0" and "-xenU" kernels. The -xen0/U split exists to achieve faster compile times in the development process: each kernel can be compiled independently, and since only a small subset of kernel components is compiled, the overall process saves a great deal of developer time. The -xen kernel is more like the kernels that ship with many distributions (Red Hat/Fedora, SuSE, Debian Etch): it comes with a large number of components, like drivers for devices and file systems, compiled as modules. This allows it to run on more hardware than the kind of stripped-down custom kernel you would find on an appliance. The -xen kernel takes longer to compile and requires an initrd, but once built it will work on more hardware and "play well" with more distributions. Many of the recent problems reported on the user list would have been avoided by using the -xen kernel.

To build the -xen kernel edit the top level Makefile so that this line:

KERNELS ?= linux-2.6-xen0 linux-2.6-xenU

looks like this

KERNELS ?= linux-2.6-xen

The line mentioned above does not appear in a recent source tree:

  # pwd
  # find . -name 'Makefile' | xargs egrep 'KERNELS'
  # echo $?

Then build with:

make world

You will get a single kernel and modules which can be used for both Domain 0 and all Domain Us. Copy the modules directory /lib/modules/2.6.<version>-xen to the /lib/modules directory of your VM and make an initrd with mkinitrd, but first create the dependencies for the modules:

depmod 2.6.<version>-xen

Next run mkinitrd as explained (mkinitrd is currently ~arch masked on amd64; I merged it anyway and it seems to work):

mkinitrd /boot/initrd-xen-3.0.img 2.6.<version>-xen

You will need to add the initrd to your grub config under the -xen kernel line. It looks something like this (more on grub.conf below. Please also see the note on gunzipping the generated image below):

module /vmlinuz-2.6-xen <your config here>
module /initrd-xen-3.0.img

The same initrd can be used for the VM by adding the following to its config file.

ramdisk = "/boot/initrd-xen-3.0.img"

Using genkernel

There are a few kinks to work out if you wish to use genkernel to generate your kernel and initrd images.

emerge genkernel

When building >=sys-kernel/xen-sources-2.6.16 the current version of genkernel (3.4.0) fails due to a change in Xen that genkernel hasn't been updated to deal with. Fortunately, there's an easy fix from bug #120236:

wget -O x86-xen0.tar.bz2
tar -xvjf x86-xen0.tar.bz2 -C /usr/share/genkernel/
rm x86-xen0.tar.bz2

The patch is not perfect. If you have trouble while booting your kernel and get a message saying that the switch_root applet could not be found, just execute

echo "CONFIG_SWITCH_ROOT=y" >> /usr/share/genkernel/x86-xen0/busy-config

to enable this applet. After that you have to rebuild your initial ramdisk.

You might need to adjust your /usr/src/linux symlink in order for genkernel to choose the right kernel.

Set the following options in genkernel.conf:

File: /etc/genkernel.conf

# Force genkernel-3.4 to use the arch-specific kernel config file from /usr/share/genkernel/${ARCH}

# Bootsplash doesn't work with Xen

# Force use of Xen-specific genkernel profile in /usr/share/genkernel/x86-xen0

# Necessary if you previously encountered the "mdev not found" error

Now run genkernel all to build and install your kernel and initrd. You might need extra arguments to genkernel if you're using EVMS, LVM, DMRAID or similar - refer to man genkernel.

When the menu configuration pops up, you'll want to:

If it still won't compile (this happened for me on AMD64 with xen-sources-), read this bug report:

You need to choose a good place to put the domU vmlinuz images. At the moment Xen can't boot from kernel images stored inside virtual machines, so you need to store them in dom0. I just put them in /boot/, but since they aren't necessary to boot dom0 you can put them anywhere.

Hint: If you use the current genkernel, don't forget to gunzip the initrd and to copy vmlinuz from /usr/src/linux to /boot as the kernel. genkernel still produces a bzImage and a gzipped initrd! /nemster 9/9/2007

Updating your boot loader


The hypervisor is installed into /boot/xen.gz. It is booted in the same way as a kernel bzImage. Edit your GRUB config (you can just modify your old entry, replace kernel with module and add a kernel line pointing to xen.gz):

File: /boot/grub/grub.conf
title  Xen 3.0 / Linux 2.6.x.y
root   (hd0,0)
kernel /xen.gz dom0_mem=98M
module /vmlinuz-2.6.x.y-xen0 root=/dev/md2

The dom0_mem hypervisor option sets the amount of memory to be allocated to domain 0 (in this case 98MB). In Xen 3.x the parameter may be specified with a B, K, M or G suffix, representing bytes, kilobytes, megabytes and gigabytes respectively; if no suffix is specified, the parameter defaults to kilobytes. Note: 98M led to insufficient memory errors with

Note: For Xen 3 the dom0_mem parameter may be omitted. In this case all memory is allocated to dom0, and then taken away as domUs are created. This is probably the better technique for Xen 3.

The module line is used to select the domain 0 kernel image you want the hypervisor to run, and to pass in options to the kernel command line.

If your domain 0 uses an initrd, you can load it by adding another module line. (Xen won't work with genkernel initrd images as generated: you literally need to gunzip and then gzip the initrd file again to get it to boot, because the default image has a few bytes of garbage beyond the end of the gzip stream.) For example, to boot a non-enforcing SELinux system with EVMS on the root disk, try:

File: /boot/grub/grub.conf
title  Xen 3.0 / Linux
root   (hd0,0)
kernel /xen.gz dom0_mem=98M
module /vmlinuz-x86- root=/dev/ram0 real_root=/dev/evms/sda5 udev doevms2 selinux=1 enabled=0
module /initramfs-genkernel-x86-
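The gunzip-then-gzip fix mentioned above can be sketched as a small shell function (the image path in the example call is hypothetical; use the initramfs file genkernel actually produced):

```shell
# Re-pack an initrd so the trailing garbage after the gzip stream is dropped.
fix_initrd() {
    gunzip -cq "$1" > "$1.raw" || true   # exit code 2 just means "trailing garbage ignored"
    gzip -c "$1.raw" > "$1" && rm "$1.raw"
}
# e.g.: fix_initrd /boot/initramfs-genkernel-x86-xen0
```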

Alternative: LILO

For those who use LILO, which does not support GRUB's "module" directive, there is still a way of achieving the desired functionality. A utility called mbootpack has to be used in order to glue together the Xen hypervisor, the dom0 kernel and the initrd image.

Initially, the xen hypervisor and dom0 kernel images have to be decompressed:

gzip -dc /boot/xen-3.0.gz > /boot/xen-3.0
gzip -dc /boot/vmlinuz-2.6.x.y-xen0 > /boot/vmlinux-2.6.x.y-xen0

Afterwards, combine these two with the initrd image with the aid of mbootpack:

cd /boot
mbootpack -o vmlinuz-2.6.x.y-xen0-mpack -m vmlinux-2.6.x.y-xen0 -m initrd-2.6.x.y-xen.img xen-3.0

You should now have a compatible bzImage containing the xen hypervisor, the xen dom0 kernel and the initrd. Lastly, update your lilo configuration by adding the appropriate entry:

File: /etc/lilo.conf
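The entry body is missing here; a plausible sketch, with the kernel name following the mbootpack example above and the root device as an assumption to adjust for your system:

```
image = /boot/vmlinuz-2.6.x.y-xen0-mpack
  label = xen
  root = /dev/md2
  read-only
```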

And lastly, don't forget to run the lilo command to apply the changes.

Alternative: PXELinux

Network booting (possibly with nfsroot file system) can ease setup and maintenance in some environments, such as a homogeneous cluster.

The Xen hypervisor and a domain 0 kernel can be booted using PXE. sys-boot/syslinux contains the PXELinux boot program; support for booting Xen has been present since syslinux-3.08.

Follow the instructions in HOWTO Gentoo Diskless Install and Diskless Nodes with Gentoo to set up a boot server running dhcp and tftp.

You need to serve the following via tftp: mboot.c32, xen.gz, the domain 0 kernel and (if used) its initrd.

In your pxelinux config file add a single line like:

File: /diskless/pxelinux.cfg/default
DEFAULT mboot.c32 xen.gz dom0_mem=258048 --- vmlinuz- ro console=ttyS0 root=/dev/nfs --- initrd-

The three dashes (---) are important and are used to separate the different modules.

You can omit the --- initrd- bit if you aren't using a ram disk for modules. Also you can use a hard disk rather than nfsroot by changing the root= to point to a block device (eg. root=/dev/hda, or root=/dev/md2 for raid).

Configure the BIOS of your Xen host to boot from the network via PXE (this can be well hidden - on a Dell PowerEdge server I had to enable Onboard Devices -> NIC w/PXE and reboot before Network Controller appeared in the Boot Sequence menu).

On booting you should see the BIOS screen, followed by the PXE loader doing DHCP and fetching PXELinux, then PXELinux booting and fetching the hypervisor and kernel, then the hypervisor booting, and finally the kernel booting and mounting the nfsroot fs from the server. Phew!

Running Xen

At this point the PC can be rebooted. Select your Xen option in GRUB, and you should see the Xen hypervisor booting:

Code: Xen hypervisor boot
 \ \/ /___ _ __   |___ / / _ \     __| | _____   _____| |
  \  // _ \ '_ \    |_ \| | | |__ / _` |/ _ \ \ / / _ \ |
  /  \  __/ | | |  ___) | |_| |__| (_| |  __/\ V /  __/ |
 /_/\_\___|_| |_| |____(_)___/    \__,_|\___| \_/ \___|_|
 University of Cambridge Computer Laboratory

 Xen version 3.0-devel ( (gcc version 3.3.6 (Gentoo 3.3.6, ssp-3.3.6-1.0, pie-8.7.8)) Tue Sep  6 17:30:34 BST 2005
 Latest ChangeSet:

(XEN) Physical RAM map:
(XEN)  0000000000000000 - 00000000000a0000 (usable)
(XEN)  00000000000f0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 000000003fe8cc00 (usable)
(XEN)  000000003fe8cc00 - 000000003fe8ec00 (ACPI NVS)
(XEN)  000000003fe8ec00 - 000000003fe90c00 (ACPI data)
(XEN)  000000003fe90c00 - 0000000040000000 (reserved)
(XEN)  00000000f0000000 - 00000000f4000000 (reserved)
(XEN)  00000000fec00000 - 00000000fed00400 (reserved)
(XEN)  00000000fed20000 - 00000000feda0000 (reserved)
(XEN)  00000000fee00000 - 00000000fef00000 (reserved)
(XEN)  00000000ffb00000 - 0000000100000000 (reserved)

Once the Hypervisor has loaded, it will boot your kernel. You should see something like:

Code: Xen hypervisor boot
(XEN) Scrubbing Free RAM: ...........done.
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen).
Linux version ( (gcc version 3.3.6 (Gentoo 3.3.6, ssp-3.3.6-1.0, pie-8.7.8)) #2 Tue Sep 6 18:30:28 BST 2005
BIOS-provided physical RAM map:
 Xen: 0000000000000000 - 0000000006000000 (usable)
96MB LOWMEM available.
On node 0 totalpages: 2

Followed by the usual kernel boot messages. Log in as normal. Congratulations, you now have a domain 0 Xen kernel up and running!

You need to activate the Xen control daemon to start the domUs.

  /etc/init.d/xend start 
 * Starting Xen control daemon ...

Creating DomUs

Using pre-built OS images

One of the easiest methods of creating domUs is to use a pre-built image - a single file that contains a filesystem and an already installed operating system.

With an existing image, you only need to create the Xen domU configuration file and supply a domU kernel (which can be re-used between several domUs). Various sites provide pre-built Xen images for a range of common Linux distributions, including CentOS, Debian, Fedora Core, Slackware, and Gentoo.
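As an illustration, a minimal domU configuration for such an image might look like this (all names and paths here are assumptions for the example; see the Xen 3.0 User Manual for the full option list):

```python
# /etc/xen/debian-test -- hypothetical minimal domU config (xm config syntax,
# which is parsed as Python)
kernel = "/boot/vmlinuz-2.6.x.y-xenU"                # domU kernel stored in dom0
memory = 128                                          # MB of RAM for the guest
name   = "debian-test"
disk   = ["file:/var/xen/debian-domU.img,xvda1,w"]    # file-backed virtual disk
root   = "/dev/xvda1 ro"
```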

Using app-emulation/domi

app-emulation/domi is a set of shell scripts from Gerd Knorr that can be used to build Suse, Fedora, Debian, Ubuntu and Gentoo domUs. domi creates a virtual disk on either a regular file or new LVM2 logical volume.

domi creates Fedora and CentOS domUs using sys-apps/yum, Debian and Ubuntu domUs using debootstrap, and Gentoo domUs using the regular chroot-style stage3 Gentoo install.

Unmask domi and its dependencies:

File: /etc/portage/package.keywords
emerge app-emulation/domi
Note: DomUs created by domi have a default root password of "secret".

Example: File-backed Debian Sarge domU

Settings can be passed to domi as environment variables (ie. NAME=value domi) or through a file passed as the first argument to domi (ie. domi config-file). We'll use a config file:

File: debian-test.cfg
domi debian-test.cfg
Code: domi debian-test.cfg
### debian-test: initialization (i386)

### debian-test: setup disk (sparse file /var/xen/debian-domU.img)
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.003593 seconds, 292 MB/s
Disk geometry for /dev/loop/1: 0kB - 4295MB
Disk label type: msdos
Number  Start   End     Size    Type      File system  Flags
1       1kB     4026MB  4026MB  primary                boot
2       4026MB  4294MB  268MB   primary
add map 1-part1 : 0 7863281 linear /dev/loop/1 1
add map 1-part2 : 0 522648 linear /dev/loop/1 7863282

### debian-test: setup root fs and swap
Label was truncated.
Setting up swapspace version 1, size = 267591 kB
LABEL=debian-test-swa, UUID=0328303e-2634-48b9-ace5-1cff6ff95cc2

### debian-test: copy cached debs [/var/cache/domi/debian-sarge]

### debian-test: fetching debootstrap from
15:23:05 URL: [2107] -> ".listing" [1]
15:23:05 URL: [51554] -> "debootstrap_0.1.17.7woody1_i386.deb" [1]
15:23:05 URL: [72236] -> "debootstrap_0.2.45-0.2_i386.deb" [1]

FINISHED --15:23:05--
Downloaded: 125,897 bytes in 3 files

### debian-test: unpack /var/cache/domi/debian-sarge/debootstrap_0.2.45-0.2_i386.deb

### debian-test: bootstrap debian sarge from
I: Retrieving debootstrap.invalid_dists_sarge_Release
I: Validating debootstrap.invalid_dists_sarge_Release
I: Retrieving debootstrap.invalid_dists_sarge_main_binary-i386_Packages
I: Validating debootstrap.invalid_dists_sarge_main_binary-i386_Packages
I: Checking adduser...
I: Checking apt...
I: Checking apt-utils...


### debian-test: save downloaded debs [/var/cache/domi/debian-sarge]
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/1-part1    3869896    147060   3526256   5% /tmp/domi-6942/mnt

### debian-test: cleanup: virtual disk
/dev/mapper/1-part1 umounted
del devmap : 1-part1
del devmap : 1-part2

### debian-test: cleanup: remove tmp files

Now start the domU and attach to its console:

xm create debian-test -c
Code: xm create debian-test -c
Linux version (root@oak) (gcc version 3.4.6 (Gentoo 3.4.6-r1, ssp-3.4.5-1.0, pie-8.7.9)) #12 Wed Jun 7 12:28:59 EST 2006
BIOS-provided physical RAM map:
 Xen: 0000000000000000 - 0000000008000000 (usable)
136MB LOWMEM available.
IRQ lockup detection disabled
Built 1 zonelists
Kernel command line:  ip=: root=/dev/xvda1 ro
Enabling fast FPU save and restore... done.
Enabling unmasked SIMD FPU exception support... done.
Initializing CPU#0
PID hash table entries: 1024 (order: 10, 16384 bytes)
Xen reported: 1665.426 MHz processor.
Dentry cache hash table entries: 32768 (order: 5, 131072 bytes)
Inode-cache hash table entries: 16384 (order: 4, 65536 bytes)
Software IO TLB disabled
vmalloc area: c9000000-fbefa000, maxmem 33ffe000
Memory: 125364k/139264k available (2744k kernel code, 5556k reserved, 816k data, 164k init, 0k highmem)
Checking if this processor honours the WP bit even in supervisor mode... Ok.
Calibrating delay using timer specific routine.. 3330.63 BogoMIPS (lpj=16653169)
Mount-cache hash table entries: 512
CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line)
CPU: L2 Cache: 256K (64 bytes/line)
CPU: AMD Athlon(tm)  stepping 01
Checking 'hlt' instruction... OK.
Grant table initialized
NET: Registered protocol family 16
PCI: setting up Xen PCI frontend stub
Generic PHY: Registered new driver
xen_mem: Initialising balloon driver.
PCI: System does not support PCI
PCI: System does not support PCI
Initializing Cryptographic API

testing md5
test 1:
test 2:


Setting the System Clock using the Hardware Clock as reference...
System Clock set. Local time: Wed Jun 14 06:52:44 UTC 2006

Initializing random number generator...done.
Recovering nvi editor sessions... done.
INIT: Entering runlevel: 2
Starting system log daemon: syslogd.
Starting kernel log daemon: klogd.
Starting MTA: exim4.
Starting internet superserver: inetd.
Starting deferred execution scheduler: atd.
Starting periodic command scheduler: cron.

Debian GNU/Linux 3.1 (none) tty1

(none) login:

Gentoo domU using quickpkg

The dom0 is a Gentoo system compiled specifically for Xen (through CFLAGS="-mno-tls-direct-seg-refs" in /etc/make.conf), and may even be built with your preferred CFLAGS and USE flags.

quickpkg allows you to create binary packages from an existing Gentoo system (such as your dom0). Combined with portage's support for alternate ROOTs, this lets you quickly create a Gentoo domU with the same CFLAGS as your dom0, without recompiling anything or performing a stage1 install.

If you don't already have it, install app-portage/gentoolkit, as we'll be using equery to list all packages currently installed on the system: emerge app-portage/gentoolkit

If you're not already using the buildpkg feature in portage, create binary packages from your dom0 install with the following loop:

for PKG in $(equery -q list | cut -d ' ' -f 3); do
  quickpkg --include-config=y =$PKG
done

There is an example in the tips section of filtering out packages, using USE flags, categories and arbitrary name components.
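The filtering idea can be sketched with a plain pipeline. The grep pattern below is purely illustrative, and the hard-coded package list stands in for the real one; on an actual system the printf would be replaced by `equery -q list | cut -d ' ' -f 3`:

```shell
#!/bin/sh
# Hypothetical filter: drop toolchain packages before feeding the list to quickpkg.
# The package list here is hard-coded for illustration only.
printf '%s\n' sys-devel/gcc-4.1.2 app-editors/vim-7.1 sys-apps/coreutils-6.10 \
    | grep -v '^sys-devel/'
```

The surviving lines would then be piped into a `while read -r PKG; do quickpkg --include-config=y "=$PKG"; done` loop.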

Create storage for your domU using your preferred method (e.g. loopback file-based image, LVM2 logical volume, physical partition, or EVMS volume). Create a filesystem on this storage and mount it at /mnt/gentoo.

Example: 4GB sparse loopback file-based image with reiserfs filesystem.

# Create a ~4GB sparse file (seek past the end instead of writing 4GB of zeros)
dd if=/dev/zero of=/var/xen/domU-gentoo bs=1M count=1 seek=4095

mkreiserfs -f /var/xen/domU-gentoo

mkdir -p /mnt/gentoo
mount -o loop /var/xen/domU-gentoo /mnt/gentoo
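A quick way to confirm an image file really is sparse: compare its apparent size with its actual disk usage. Demonstrated here on a small scratch file (the path and size are illustrative):

```shell
#!/bin/sh
# Create a ~64MB sparse scratch file by seeking past the end, then compare sizes.
IMG=$(mktemp)
dd if=/dev/zero of="$IMG" bs=1M count=1 seek=63 2>/dev/null
apparent=$(du -m --apparent-size "$IMG" | cut -f1)  # ~64 MB
actual=$(du -m "$IMG" | cut -f1)                    # only the 1 MB actually written
echo "apparent=${apparent}MB actual=${actual}MB"
rm -f "$IMG"
```

The same check on /var/xen/domU-gentoo shows the image growing only as the domU writes to it.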

Next you need to create and initialize a swap space for the domU. These instructions use a swap file, but you may want to consider using a separate partition, as a normal system would. For a discussion of the differences between the two, see the Linux Kernel Mailing List thread: Swap partition vs swap file.

dd if=/dev/zero of=/mnt/gentoo/swap bs=1M count=256
mkswap /mnt/gentoo/swap

Download a stage3 tarball and unpack it into /mnt/gentoo/: tar -xvjpf stage3-i686-2008.0.tar.bz2 -C /mnt/gentoo/

Mount the proc and dev filesystems so they are available from within the chroot environment:

mount -t proc none /mnt/gentoo/proc
mount -o bind /dev /mnt/gentoo/dev

Copy the existing portage tree from dom0. This will also copy the binary packages built earlier, as they are stored in /usr/portage/packages: cp -av /usr/portage/ /mnt/gentoo/usr/
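Note that a plain cp also drags along /usr/portage/distfiles, which can be huge. One possible space-saver is a filtered copy; this is a sketch on a scratch tree (GNU tar's --exclude is assumed, and on the real system the source would be /usr/portage and the target /mnt/gentoo/usr/portage):

```shell
#!/bin/sh
# Filtered copy demo: distfiles/ is skipped, packages/ (the binary packages) survives.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/packages/All" "$src/distfiles"
touch "$src/packages/All/foo-1.0.tbz2" "$src/distfiles/foo-1.0.tar.bz2"
( cd "$src" && tar --exclude=distfiles -cf - . ) | ( cd "$dst" && tar -xf - )
test -f "$dst/packages/All/foo-1.0.tbz2" && echo "packages copied"
test ! -e "$dst/distfiles" && echo "distfiles skipped"
rm -rf "$src" "$dst"
```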

Copy dom0's make.conf and /etc/portage/ to domU, so we're using the correct CFLAGS, CHOST, USE flags etc.

cp /etc/make.conf /mnt/gentoo/etc/
cp -R /etc/portage /mnt/gentoo/etc/

Copy resolv.conf into the chroot environment so that DNS resolution works inside it: cp /etc/resolv.conf /mnt/gentoo/etc/

Make sure dom0 and domU are using the same profile, which is probably default-linux/x86/2008.0 (or its /desktop or /server variant):

rm /mnt/gentoo/etc/make.profile
ln -s ../usr/portage/profiles/default/linux/x86/2008.0/server /mnt/gentoo/etc/make.profile
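The symlink target is relative on purpose: it has to resolve inside the chroot, where /mnt/gentoo becomes /. This sketch on a miniature scratch tree (all paths illustrative) shows a relative link resolving correctly from outside the chroot:

```shell
#!/bin/sh
# Build a miniature chroot layout and verify the relative profile link resolves.
root=$(mktemp -d)
mkdir -p "$root/usr/portage/profiles/default/linux/x86/2008.0/server" "$root/etc"
ln -s ../usr/portage/profiles/default/linux/x86/2008.0/server "$root/etc/make.profile"
# Relative to $root/etc, the link walks up into usr/, so it works both now
# (from the dom0) and later inside the chroot, where $root is /.
test -d "$root/etc/make.profile/" && echo "profile link resolves"
rm -rf "$root"
```

An absolute target like /usr/portage/... would instead point at the dom0's own profile from outside the chroot.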

Use binary packages to create a new Gentoo system in /mnt/gentoo/, overwriting config files from the stage3 tarball: ROOT=/mnt/gentoo/ CONFIG_PROTECT=-/etc FEATURES=-collision-protect emerge --usepkg --emptytree system

Warning: You may be tempted to install additional packages at this stage, such as net-misc/dhcp, but you MUST wait until inside the chroot environment! Portage's pkg_setup function does not work as expected when combined with ROOT, and certain packages (such as those using enewuser or enewgroup) will not install correctly.

Chroot into the Gentoo domU and run the following. Don't worry about the ominous /usr/bin/python: error while loading shared libraries: cannot open shared object file: No such file or directory when running gcc-config; that's what we're fixing here. sys-devel/gcc doesn't play well with portage's ROOT option, so after the emerge above an invalid gcc profile is selected.

chroot /mnt/gentoo
gcc-config 1
source /etc/profile

Using the same method as you do for a normal install, set the domU's timezone and hostname.

Install app-portage/gentoolkit, then use revdep-rebuild to remerge (from our binary packages) anything that still links against a package from the stage tarball.

emerge --usepkg app-portage/gentoolkit
revdep-rebuild --usepkg

Tip: You can use the --usepkg flag when emerging any package that was also installed on the dom0 system the binary packages were created on, as long as you want exactly the same USE flags.

Take care of any rebuilds or cleanups required by python and perl updates (using our binary packages, of course).

perl-cleaner all --usepkg

Install any non-system packages that will be required to boot the domU (e.g. sys-fs/reiserfsprogs, net-misc/dhcp): emerge --usepkg net-misc/dhcp sys-fs/reiserfsprogs

Set a password for the domU's root user using: passwd

Finally, exit the chroot environment with: exit

Create a Xen configuration file for our new domU. In this example, the loopback file /var/xen/domU-gentoo will be exposed to the domU as /dev/xvda.

File: /etc/xen/gentoo
# general
name    = "gentoo";
memory  = 256;

# booting
kernel  = "/boot/xen-domU";

# virtual harddisk
disk = [ "file:/var/xen/domU-gentoo,xvda,w" ];
root = "/dev/xvda ro";

# virtual network
vif = [ "" ];
dhcp = "dhcp";
Note: The disk and root entries must both refer to the same virtual block device, otherwise the boot attempt will fail. The domU cannot see any physical hardware attached to the dom0 unless it is defined on the disk line of this configuration file.

Edit domU's /etc/fstab to use the swapfile and Xen block device (/dev/xvda). domUs don't need a boot partition and don't have a CD-ROM or floppy drive, so remove those lines.

File: /mnt/gentoo/etc/fstab
# <fs>                  <mountpoint>    <type>          <opts>          <dump/pass>

/dev/xvda               /               reiserfs        noatime         0 1
/swap                   none            swap            sw              0 0

proc                    /proc           proc            defaults        0 0
shm                     /dev/shm        tmpfs           nodev,nosuid,noexec     0 0

Unmount the domU filesystems: umount /mnt/gentoo/{dev,proc,}
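The trailing comma in that umount command is deliberate: bash brace expansion then includes /mnt/gentoo/ itself as the final argument, so the nested dev and proc mounts are released before the root mount. Echoing the expansion makes this visible:

```shell
#!/bin/bash
# Show what the shell actually hands to umount (bash brace expansion).
echo umount /mnt/gentoo/{dev,proc,}
# prints: umount /mnt/gentoo/dev /mnt/gentoo/proc /mnt/gentoo/
```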

Start the domU and attach to its console: xm create gentoo -c

Once the domU is running, check that all users needed by services such as ssh are present in /etc/passwd and /etc/group.

Tip: You can now create a stage4 tarball to use for future Gentoo domUs on the same dom0.

Autostart domU

If you want a domU to be started at system boot, create a symlink in /etc/xen/auto pointing to its configuration file.

cd /etc/xen/auto
ln -s /etc/xen/gentoo

Add the xendomains daemon to the default runlevel with: rc-update add xendomains default

By Hand

It is possible to create domUs entirely by hand: install the guest as you would any other Linux system. Once finished, load it the same way as ttylinux, or, if it lives on a disk partition, look at /etc/xen/xmexample1 to see how to configure Xen to use partitions instead of files.


Troubleshooting

Serial console

It can be difficult to see what's going on as the system boots. The easiest way is to use another computer connected via the serial port. Another option is to use GNU screen.

Compile support for a serial console into your Xen kernel:

Linux Kernel Configuration: Console on serial port
Device drivers --->
  Character devices --->
    Serial drivers --->
      [*] Console on 8250/16550 and compatible serial port

Add something like the following to your grub.conf:

File: /boot/grub/grub.conf
title  Xen 2.0.8_pre20050826 / XenLinux
root   (hd0,0)
kernel /xen.gz dom0_mem=98304 noreboot com1=9600,8n1
module /vmlinuz- root=/dev/md2 noreboot console=ttyS0 debug

The noreboot option tells Xen not to allow the kernel to reboot, even if you do shutdown -r. The hypervisor option com1=9600,8n1 and the kernel option console=ttyS0 tell both to output boot messages to the serial port.

Xen says Linux dom0 is not a Xen-compatible Elf image

Verify that the resulting dom0 kernel image contains a xen_guest section:

readelf -a /boot/vmlinuz-2.6.x.y-xen0 | egrep xen_guest

Required Modules

On AMD64 it may be necessary to load the 'loop' and 'tun' modules, or Xen/HVM networking will not work at all. Add them manually to /etc/modules.autoload.d/kernel-2.6 (or compile them statically into the kernel instead of as modules).

Compiling Xen fails on kernel.c

If Xen fails to compile (kernel.c fails to build), try removing 387 support from your make.conf, i.e. remove 387 from the -mfpmath=sse,387 flag.
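The edit is a one-line substitution. Demonstrated here on a scratch copy (the CFLAGS line is illustrative; on a real system back up /etc/make.conf and apply the same sed there):

```shell
#!/bin/sh
# Strip ,387 from -mfpmath on a demo copy of make.conf.
demo=$(mktemp)
cat > "$demo" <<'EOF'
CFLAGS="-O2 -march=athlon-xp -mfpmath=sse,387 -mno-tls-direct-seg-refs"
EOF
sed -i 's/-mfpmath=sse,387/-mfpmath=sse/' "$demo"
grep -o -- '-mfpmath=[a-z0-9,]*' "$demo"   # -mfpmath=sse
rm -f "$demo"
```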

invalid option `no-tls-direct-seg-refs'

If GCC does not know the 'no-tls-direct-seg-refs' flag (cc1: error: invalid option `no-tls-direct-seg-refs'), you will need to upgrade to a newer version (3.4 works) as described in the Gentoo GCC Upgrading Guide.

gcc & '-fno-stack-protector-all' error

Xen 3.0.2 has problems with the hardened USE flag. If gcc fails with an error about the '-fno-stack-protector-all' option, try removing the hardened flag.

Warning about /lib/tls on boot

If you see a warning similar to the following on boot, you're not using a recent enough version of glibc (you need at least version 2.4). Fix this by upgrading your system, following the Gentoo Upgrading Guide.

Code: Xen /lib/tls Warning
  ** WARNING: Currently emulating unsupported memory accesses  **
  **          in /lib/tls glibc libraries. The emulation is    **
  **          slow. To ensure full performance you should      **
  **          install a 'xen-friendly' (nosegneg) version of   **
  **          the library, or disable tls support by executing **
  **          the following as root:                           **
  **          mv /lib/tls /lib/tls.disabled                    **
  ** Offending process: init (pid=1)                           **

System Freezing on Reboot

If you experience your system freezing when rebooting, this could be due to conflicts between the Xen and gentoo network setups. To solve the problem, edit /etc/conf.d/rc and change the RC_NET_STRICT_CHECKING setting to 'lo'.

For possible further information, see the Gentoo forum thread

log flood when using dhcpcd-3.0.16 or 17

When using dhcpcd-3.0.16 or 17 as DHCP client for domU, you might notice your messages log being flooded. Also the messages log on the dhcp server will be flooded with messages every three seconds. Typical messages are:

Code: dhcp client
Jun 27 21:10:03 prime dhcpcd[3762]: eth0: bad UDP checksum, ignoring
Jun 27 21:20:03 prime dhcpcd[3762]: eth0: bad UDP checksum, ignoring
Jun 27 21:30:00 prime dhcpcd[3762]: eth0: bad UDP checksum, ignoring
Jun 27 21:30:03 prime dhcpcd[3762]: eth0: bad UDP checksum, ignoring
Code: dhcp server
Jun 27 21:23:55 lucifer dhcpd: DHCPREQUEST for from 0a:70:72:69:6d:65 (prime) via eth0
Jun 27 21:23:55 lucifer dhcpd: DHCPACK on to 0a:70:72:69:6d:65 (prime) via eth0
Jun 27 21:23:58 lucifer dhcpd: DHCPREQUEST for from 0a:70:72:69:6d:65 (prime) via eth0
Jun 27 21:23:58 lucifer dhcpd: DHCPACK on to 0a:70:72:69:6d:65 (prime) via eth0

This can be solved by emerging dhcpcd-2.0.5-r1 in the domU.

--Jhendrix 20:10, 27 June 2007 (UTC)

Host (dom0) networking fails to start

The default Xen networking scripts are not particularly resilient to differing network configurations. A number of solutions were discussed on the forums. Manual configuration can often be completed with commands similar to the following after the Xen daemon has started (substitute your own address, netmask and gateway; mileage may vary):

 sleep 20
 /sbin/ifconfig eth0 down
 /sbin/ifconfig eth0 <address> netmask <netmask>
 /sbin/route add default gw <gateway>

This may be hooked into startup via /etc/conf.d/local.start, or alternatively via the Gentoo configuration detailed in the next section.

Dom0 network configuration (alternative solution)

A cleaner solution for dom0 networking is to use the standard Gentoo networking scripts.

First, configure networking in /etc/conf.d/net as usual:

Code: /etc/conf.d/net
 # Example (baselayout-1 syntax; adjust the interface name to your NIC):
 config_eth0=( "null" )
 bridge_xenbr0="eth0"
 config_xenbr0=( "dhcp" )
 brctl_xenbr0=( "setfd 0" "sethello 0" "stp off" )


Once all the configurations are in place link into '/etc/init.d' as normal via the following command:

 sudo ln -sf net.lo /etc/init.d/net.xenbr0

but DO NOT add the interfaces to startup via 'rc-update'. Instead, allow 'xend' to start the bridge interfaces via the following.

Move the original Xen networking script out of harm's way:

 sudo mv /etc/xen/scripts/network-bridge /etc/xen/scripts/network-bridge.orig

and put this script in its place:

Code: /etc/xen/scripts/network-bridge


#!/bin/sh
# Delegate bridge setup to Gentoo's net.xenbr* init scripts.
rc_expr='/etc/init.d/net.xenbr*'
for xenbr in $( echo ${rc_expr} ); do
    if [ "${xenbr}" = "${rc_expr}" ]; then
        # The glob did not expand, so no net.xenbr* script exists
        echo "xen bridge: no bridge configuration found"
        exit 0
    fi
    ${xenbr} "$@"
done

or grab a copy as follows:

 sudo wget -O /etc/xen/scripts/network-bridge,v1.1
 sudo chmod 755 /etc/xen/scripts/network-bridge

Reboot, and the interfaces should come up as expected. One difference compared to the original 'network-bridge' script: the interfaces are not renamed, so 'eth0' remains the physical interface.


Virtual Machine Manager

The host summary window lists all running virtual machines and current resource utilization.

Red Hat hosts Virtual Machine Manager (in portage: app-emulation/virt-manager), a desktop application to monitor and manage virtual machines. Note that some of its features might require Fedora-specific kernel patches.

Related Articles

Where to get help

Other Related Links

Hosted Gentoo Xen domUs

