
EVMS



Introduction

This page is an updated version of the earlier EVMS setup guide for Gentoo 2005.1. I have updated it to reflect my experience setting up Gentoo 2006.1, and I have tried to keep the Howto as intuitive as possible so that it is useful in as many different setups as possible. The reason for updating it was that the 2005.1 version was inadequate and no longer provided enough information to get a functional system up and running. The 2005.1 version is still good for a more detailed explanation of containers, regions, etc.

You do not need to be familiar with the Gentoo install process, since this Howto is meant to be used with the fairly easy manual Gentoo installation, without the GTK+ or Dialog installers. I haven't verified whether the installation can be completed with either of those installers, and I recommend sticking with manual installation. During my install the latest 2.6.19-gentoo-r5 kernel was emerged and compiled, and I strongly recommend you do the same, as there may be issues with older kernels.

I recommend visiting the EVMS homepage prior to installing, and also reading up on what RAID is.

This walkthrough assumes a simple hardware environment: a single home server with 4 hard disks (1*80Gb, 3*320Gb) and no RAID controller. The possible combinations of EVMS and RAID partitioning schemes for such a system are almost endless, but for the sake of clarity I will walk through setting up:

I've preserved the 2005.1 EVMS setup scheme and relocated it to the end of this Howto as an example of another partitioning scheme, with a single server and two identical hard disks:

Both setups use all native EVMS partitions, but the 2005.1 setup does not use LVM. Both I and the former author discourage you from setting up Bad Block Relocation, as there can be reliability problems when using BBR.

Getting Started

Boot up the 2006.1 livecd. As stated above I have four disks; they are /dev/sda, /dev/sdb, /dev/sdc and /dev/sdd, and those are the names I will use throughout the walkthrough. Your disks might be sd* or hd*; keep that in mind when typing disk-related commands.

When the system is loaded, switch to a console and activate EVMS with evms_activate, then launch evmsn. evmsn is an ncurses-based disk manager; it is the central point, and the only tool, with which you will manage your disks in an EVMS system. I don't recommend the evms command-line interface here, as the ncurses interface is much more intuitive.
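From the livecd console, that is:

 evms_activate
 evmsn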

You will be presented with a blue screen with following kind of information:

 Actions   Settings                                                        Help
 0=Logical Volumes

 Name                          Size Modified Active R/O  Plug-in   Mountpoint
 ──────────────────────────────────────────────────────────────────────────────
 /dev/evms/loop0           700.1 MB             X                  /mnt/livecd

2007.0-r1 Note (This may apply to other install CDs): If the drive you wish to work with does not have any partitions defined on it (let's say sde), "EVMS assumes that this disk is a compatibility volume known as /dev/evms/sde." [1] On the "0=Logical Volumes" screen, you must delete the compatibility volume for the drive first (highlight the compatibility volume > press [ENTER], select 'Delete...' and work your way through the prompts) then add a DOS Segment Manager (Press A-A-S and work your way through the prompts).

There are several screens presenting the different levels of the partitioning structure. You can switch between them with Tab. Currently only the livecd is shown, mounted under EVMS. Press Tab until you see the screen "5=Disk Segments". Here all your disks are represented with empty partition space and master boot records.

The segment level is the lowest in the EVMS hierarchy. You create disk segments which you can then put inside containers or add to regions. The levels stack up, and at the top there is an EVMS Volume. Each level you put your disk segments into provides some extra functionality: data redundancy, combining volumes seamlessly into bigger ones, taking snapshots of volumes, and so on. I will be using the DOS Segment Manager for all of the created segments.
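As a rough sketch of how the levels stack for the root volume created below (names follow that example; your objects may differ):

 disk (sda)
  └─ segment (sda2)                   DOS Segment Manager
      └─ container (lvm2/root)        LVM2 plugin
          └─ region (lvm2/root/root)  LVM2 plugin
              └─ EVMS Volume (/dev/evms/root)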

While using evmsn, you might see the following error appear:

 device-mapper: dm-linear: Device lookup failed
 device-mapper: error adding target to table

It should be nothing to worry about at this stage. I got the error every time I tried to look at the details of some partition, but the partitions and volumes were still created correctly.

We will start by creating a boot partition. Press A-C-S (Actions, Create, Segment) and select sda_freespace1. I created a boot partition of 64MB, as that will definitely be enough and a bigger partition would be a sheer waste of space. Accept the defaults (partition id 0x83 and type Linux). Remember to set the bootable flag to yes. After this, select A-C-E (EVMS Volume) and wrap the segment directly inside an EVMS Volume. Label the volume "boot". Now, in screen 0, you can see /dev/evms/boot beside the loop device. Next, make a file system for the boot volume (A-F-M) and select Ext2/3. Do the same for sdd by creating a swap segment; remember to switch the fs type to Linux Swap. Wrap the swap segment into /dev/evms/swap as well, and format it as swapfs.
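If you do end up using the evms command-line interface, a rough equivalent of these steps, written in the syntax of the 2005.1 "Quick Command Line Version" sections later in this Howto, would look something like this (an untested sketch; object names may differ on your system):

 Create: Segment,sda_freespace1,size=64MB
 Create: Volume, "sda1", Name="boot"
 Mkfs: Ext2/3={}, /dev/evms/boot
 Create: Segment,sdd_freespace1,size=2GB
 Create: Volume, "sdd1", Name="swap"
 Mkfs: SWAPFS={}, /dev/evms/swap
 Save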

The reason I haven't RAID-mirrored swap or boot is that I feel it's unnecessary for my system. You *should* be able to complete this Howto and get a booting system even if you decide to make a RAID or LVM region/container of your boot and swap partitions, but as I have no experience of doing either, I cannot guarantee that they will work.

--Markd 15:25, 29 May 2007 (UTC) Quoted from http://evms.sourceforge.net/install/boot.html:

NOTE: There are some limitations on the type of volume that can be used to hold your /boot filesystem, regardless of which boot-loader you're using. This volume must be created from a simple disk segment/partition, or from a raid-1 region on top of simple segments. Using a volume created from LVM regions to hold /boot is not supported at this time. The volume itself can be either an EVMS or a compatibility volume.

Furthermore, I found out that grub does not like the boot partition to be on a partition with Bad Block Relocation enabled.

Next, we will make a root segment out of the remaining space on sda. Wrap the root segment into an LVM2 container (A-C-C) so that it shows up as lvm2/root in screen 4, and after that put it into an LVM2 region (A-C-R) so that we are able to put it under an EVMS Volume. I used Ext3 as my root fs, but you can format your partitions however you wish, as long as support for the fs is built into your kernel.

For /home, I created a RAID 1 region containing sdb and sdc. After this, all RAID regions show up in screen 3 under md/mdX. I wrapped the md0 region into an LVM2 region and put that inside an EVMS Volume. Ext3 was also used for the /home fs. The remaining space on sdd I intend to use for streaming video storage, so I partitioned it the same way as root, except using JFS as the fs, though XFS might be more efficient with large video files.
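For the RAID 1 /home, a rough command-line sketch in the same syntax (untested; the container name "homec" is illustrative, and the canonical container sequence is shown in the 2005.1 sections below):

 Create: Segment,sdb_freespace1
 Create: Segment,sdc_freespace1
 Create: Region,MDRaid1RegMgr={},sdb1,sdc1
 Create: Container,LVM2={name="homec"},md/md0
 Create: Region, LVM2={name="home"},lvm2/homec/Freespace
 Create: Volume, "lvm2/homec/home", Name="home"
 Mkfs: Ext2/3={vollabel=home}, /dev/evms/home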

When you've finished, you should have something like this showing up in screen 0:

 Actions   Settings                                                        Help
 0=Logical Volumes

 Name                          Size Modified Active R/O  Plug-in   Mountpoint
 ──────────────────────────────────────────────────────────────────────────────
 /dev/evms/boot             62.7 MB    X                 Ext2/3
 /dev/evms/home            298.1 GB    X                 Ext2/3
 /dev/evms/other           296.1 GB    X                 JFS
 /dev/evms/root             74.5 GB    X                 Ext2/3
 /dev/evms/swap              2.0 GB    X                 SWAPFS
 /dev/evms/loop0           700.1 MB             X                  /mnt/livecd

Quit (A-Q) and save the changes. Creating the filesystems will take a while, and there should be a notice saying "raid array is not clean -- starting background reconstruction". The RAID reconstruction took about an hour on my system, which is just enough time to set up the other parts of the system. You can check the state of the RAID with cat /proc/mdstat.

CHROOTing and Installing System

Partitions are now created under EVMS, so you can mount them:

 swapon /dev/evms/swap
 mount /dev/evms/root /mnt/gentoo
 mkdir /mnt/gentoo/boot
 mount -t ext3 /dev/evms/boot /mnt/gentoo/boot

After this you can continue the installation. Keep this Howto open, though; you will need to return here after running menuconfig on your kernel. Remember to compile in all the needed filesystems, the options mentioned in the EVMS Kernel FAQ, and options like COMPAT_VDSO. EVMS is quite picky about the kernel, so be careful to choose all the flags and drivers essential for your system.

The reason I mounted all my partitions under EVMS was to avoid having to use hacks like the BD-Claim patch. The EVMS documentation states that "If none of the kernel's built-in partitions are mounted, then there won't be any conflicts when DM tries to claim the disks.".

I compiled the kernel manually, because genkernel seemed unable to produce a working kernel even though I used genkernel --kernel-config=myownconfig --evms2 all. I recommend building manually and wgetting the evms-2.5.5 (or newer) initrd.

Remember to modify your /etc/fstab to reflect the new mountpoints of the system:

 /dev/evms/boot   /boot   ext2   noatime  1 2
 /dev/evms/swap   none    swap   sw       0 0
 /dev/evms/root   /       ext3   noatime  0 1
 /dev/evms/home   /home   ext3   noatime  0 2
 /dev/evms/other  /other  jfs    noatime  0 2
 

Grub settings

After copying evms-2.5.5.-initrd and myownkernel-2.6.19 into /boot, I edited grub.conf to have the following settings:

 default 0

 title 2.6.19
 kernel /myownkernel-2.6.19 root=/dev/evms/root ramdisk=8192 udev vga=0
 initrd /evms-2.5.5.-initrd

ramdisk=8192 was needed because the evms initrd was too big to be loaded with the normal 4096 KB limit (you can change this limit in your kernel's config before compiling). udev is there simply because I'm using udev; you might not want to.
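If you would rather raise the limit in the kernel config than pass ramdisk= on the kernel line, the relevant 2.6 menuconfig option is, as far as I recall (verify in your own tree):

 Device Drivers --->
   Block devices --->
     RAM disk support
       (8192) Default RAM disk size (kbytes)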

--HeathPetersen 17:27, 5 June 2007 (UTC) I prefer to have genkernel build my initramfs for me. It eliminates potential compatibility problems associated with using the evms initramfs. Here's how I do it:

 # mount /boot
 # genkernel --kerneldir=/usr/src/linux-<my kernel version> --evms2 initrd
 # cd /boot
 # ln -s initramfs-genkernel-x86-<my kernel version> initramfs
 # cat /boot/grub/grub.conf
 default 0

 title Default
 kernel /boot/kernel root=/dev/ram0 init=/linuxrc ramdisk=8192 real_root=/dev/evms/root nodetect doload=dm-mod,raid1 doevms2
 initrd /boot/initramfs
 #

I could not get the genkernel kernel to boot, not even with these parameters:

 kernel /kernel-gentoo-x86-2.6.12-r10 root=/dev/ram0 real_root=/dev/evms/sda1 init=/linuxrc udev doevms2

and I can't see any benefit in using genkernel. If you get a genkernel kernel to boot without the EVMS initrd, please update this part.


[Comment, 12 Nov 2007: genkernel is OK. You should not use doevms2; simply use doevms and everything will be fine.]

Grub could not install itself with grub-install, so I tried from the grub command line:

 root (hd0,0)
 setup (hd0)

The install command failed, so I booted, chrooted back into the environment, and ran the install again. This is a strange quirk reported by many people: the grub install fails on the first try, but after a reboot it succeeds.
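A sketch of that retry, using commands that appear elsewhere in this Howto (adjust devices and paths to your layout):

 # boot the livecd, then:
 evms_activate
 mount /dev/evms/root /mnt/gentoo
 mount /dev/evms/boot /mnt/gentoo/boot
 mount -o bind /dev /mnt/gentoo/dev
 chroot /mnt/gentoo /bin/bash
 grub
 > root (hd0,0)
 > setup (hd0)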

--HeathPetersen 08:47, 5 June 2007 (UTC) If you get Error 22 from grub when typing setup (hd0), try this:

 device (hd0) /dev/evms/.nodes/sda
 root (hd0,0)
 setup (hd0)

When you boot for the first time, evms complains about LVM2 headers. If you created your LVM2 containers and regions under EVMS volumes, you can ignore the error: it only matters if you did not create all partitions with evmsn. To silence it, set device_size_prompt = no in the LVM2 plugin section of /etc/evms.conf.
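That is, something along these lines in /etc/evms.conf (the exact section name and layout may differ in your version; check the existing file):

 # LVM2 plugin section of /etc/evms.conf (section name assumed)
 lvm2 {
        device_size_prompt = no
 }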

I noticed that after the EVMS initrd had already activated EVMS, the boot scripts tried to activate it again, resulting in errors; this seemed to be caused by evms being first in RC_VOLUME_ORDER (/etc/conf.d/rc). I switched the order back to "raid evms lvm dm" and the problem disappeared. Had I not used the EVMS initrd, the order should have been "evms raid lvm dm", because otherwise /dev/evms would be empty due to EVMS not being activated early enough.
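In other words, when booting with the EVMS initrd:

 # /etc/conf.d/rc
 RC_VOLUME_ORDER="raid evms lvm dm"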

EVMS for Gentoo 2005.1

The EVMS User Guide is extremely detailed about the syntax of each command. What I found difficult (and had to figure out by trial and error) was what sequence of EVMS objects would accomplish what I wanted.

This picture really says everything. You can pretty much ignore the rest of the walkthrough if you understand it:

[Image: EVMS object relationships]

2005.1 Getting Started

Boot 2005.1 using either the CD or the network. I used this boot line, but yours may well differ: gentoo-nofb

My disks are called /dev/sda and /dev/sdb; yours may be the same or /dev/hda and /dev/hdb. Throughout this walkthrough I'll be using sda and sdb.

Note that the minimal CD does not include grub or grub-install. If you plan on using grub as your bootloader, it is much easier to download the full 700 MB livecd.

2005.1 Starting EVMS

EVMS is started via the evms_activate program. The normal Gentoo startup scripts run this from two places: there is a hard-coded reference in /lib/addons/udev_start.sh, and a soft reference via the RC_VOLUME_ORDER variable in /etc/conf.d/rc. Basically, you have to make sure that evms appears in RC_VOLUME_ORDER (it is there by default). This will cause evms to be activated at boot time.

2005.1 Start the EVMS ncurses interface

You could use the command-line interface instead, but I'm going to demonstrate using the ncurses interface. It's much easier for beginners to figure out what is going on with the sort-of-visual UI. At the end of each section I'll give the equivalent evms commands.

Start the EVMS ncurses interface:

evmsn

I'm not going to spend much time talking through how to drive the interface when the EVMS User Guide has all the dirt. But you need to understand that there are several screens showing the current evms status (switch screens using Tab) and that you navigate the menu system using letters, arrow keys, and Enter.

After you start evmsn you should see a screen like this:

 Actions   Settings                                                        Help
 0=Logical Volumes

 Name                          Size Modified Active R/O  Plug-in   Mountpoint
 ──────────────────────────────────────────────────────────────────────────────
 /dev/evms/loop0            41.1 MB             X                  /mnt/livecd

This is showing the livecd loopback filesystem that was used to boot the system.

Tab to the next screen; you should see your two disks:

 Actions   Settings                                                        Help
 5=Disk Segments

 Name                     Size Modified Active R/O  Type      Offset    Plug-in
 ──────────────────────────────────────────────────────────────────────────────
 sda_mbr                 31 KB                      Meta Data 0         DosSegM
 sda_freespace1        74.5 GB                      Free Spac 63        DosSegM
 sdb_mbr                 31 KB                      Meta Data 0         DosSegM
 sdb_freespace1        74.5 GB                      Free Spac 63        DosSegM

If you are not seeing sda_mbr and sda_freespace1 (and the same for sdb) then something has gone terribly wrong; you should fix it before proceeding. You may need to write an empty dos partition table using fdisk or cfdisk.
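If you do need to write one, with fdisk that looks like this (destructive; double-check which device you are pointing at):

 # fdisk /dev/sda
 o    <- create a new empty DOS partition table
 w    <- write it to disk and exit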

2005.1 Create the boot partitions

Grub and LILO are pretty stupid; they know nothing about EVMS, nor the kernel implementation of the device mapper (which does the actual run-time magic).

We therefore need to create a boot partition with the minimum necessary magic for Grub (which I'll be using because the instructions are much simpler; for LILO, see the EVMS boot loader page).

The basic strategy is that we will be creating a normal dos partition at the start of each disk to hold the boot partition. These partitions will be mirrored to each other, but the underlying raw partitions will be usable by Grub.

Because Grub is not EVMS aware, we will not be using any of the more advanced EVMS features like bad block mapping or growable regions for the boot partition.

2005.1 Create the raw dos partitions

First create a 512 MB dos partition on sda:

 Actions » Create » Segment
 (DOS Segment Manager should already be selected)
 Next
 select sda_freespace1 - cursor to it and hit space
 Next
 Change Size to 512mb
 Toggle Bootable to yes
 Select Create at the bottom and hit enter
 OK

Repeat the commands above to create a 512 MB boot partition on sdb as well.

After you have done this you should see the following disk segments (remember to use Tab to switch between screens if you are not seeing screen 5=Disk Segments at the top):

 Actions   Settings                                                        Help
 5=Disk Segments

 Name                     Size Modified Active R/O  Type      Offset    Plug-in
 ──────────────────────────────────────────────────────────────────────────────
 sda_mbr                 31 KB    X                 Meta Data 0         DosSegM
 sda1                 509.8 MB    X                 Data      63        DosSegM
 sda_freespace1        74.0 GB                      Free Spac 1044225   DosSegM
 sdb_mbr                 31 KB    X                 Meta Data 0         DosSegM
 sdb1                 509.8 MB    X                 Data      63        DosSegM
 sdb_freespace1        74.0 GB                      Free Spac 1044225   DosSegM

And there is a new screen that wasn't available before: screen 1=Available Objects. Tab to it and you should see something like this:

 Actions   Settings                                                        Help
 1=Available Objects

 Name                     Size Modified Active R/O  Plug-in
 ──────────────────────────────────────────────────────────────────────────────
 sda1                 509.8 MB    X                 DosSegMgr
 sdb1                 509.8 MB    X                 DosSegMgr

2005.1 Create a RAID-1 Mirror

Now link those two raw partitions together to a single raid mirror:

 Actions » Create » Region
 Select MD Raid 1 Region Manager
 Next
 Select both sda1 and sdb1
 Next
 Create
 OK

Screen 1 (Available Objects) should now show the newly created md0 region:

 Actions   Settings                                                        Help
 1=Available Objects

 Name                     Size Modified Active R/O  Plug-in
 ──────────────────────────────────────────────────────────────────────────────
 md/md0               509.8 MB    X                 MDRaid1RegMgr

2005.1 Create an EVMS volume

You now need to take the raid region and turn it into an EVMS volume:

 Actions » Create » EVMS Volume
 Name the region "boot"
 Create

Screen 0 (Logical Volumes) should show the newly created volume:

 Actions   Settings                                                        Help
 0=Logical Volumes

 Name                          Size Modified Active R/O  Plug-in   Mountpoint
 ──────────────────────────────────────────────────────────────────────────────
 /dev/evms/boot            509.7 MB    X
 /dev/evms/loop0            41.1 MB             X                  /mnt/livecd

2005.1 Make a filesystem

Finally you have to make a filesystem. I'm going to use ext3:

 Actions » File System » Make
 select ext2/3
 Next
 select /dev/evms/boot (if it is not already selected)
 Next
 (optional:) give it a volume label of "boot"
 Make Filesystem

Screen 0 (Logical Volumes) will now show /dev/evms/boot as having an Ext2/3 filesystem on it:

 Actions   Settings                                                        Help
 0=Logical Volumes

 Name                          Size Modified Active R/O  Plug-in   Mountpoint
 ──────────────────────────────────────────────────────────────────────────────
 /dev/evms/boot            509.7 MB    X                 Ext2/3
 /dev/evms/loop0            41.1 MB             X                  /mnt/livecd

You've now successfully completed the boot partition! It's now time to try something more challenging.

[Comment from Comcy: When booting from a /boot under RAID1 (don't forget to install Grub on BOTH drives), Grub will boot from EITHER without needing two entries (one for each mirrored drive) in grub.conf.]

2005.1 Quick Command Line Version

Here is the identical series of steps outlined above using the evms command line program:

 Create: Segment,sda_freespace1,size=512MB
 Create: Segment,sdb_freespace1,size=512MB
 Create: Region,MDRaid1RegMgr={},sda1,sdb1
 Create: Volume, "md/md0", Name="boot"
 Mkfs: Ext2/3={vollabel=boot}, /dev/evms/boot
 Save
 Quit

2005.1 Swap space

Repeat the above steps making a swap partition instead of a boot partition. I created my swap partition using RAID 0 instead of RAID 1; to do this select the RAID 0 region manager instead of the RAID 1 region manager.

[Comment from JaredThirsk: Note that the Linux kernel automatically stripes across swap partitions as long as they are mounted at the same priority, so some people believe it is best not to use RAID0 for the swap partitions and to let the kernel handle striping. Another approach is to use RAID1 mirroring to ensure swap memory is still valid if a disk goes down.]
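For the kernel-striping approach, equal-priority swap entries in /etc/fstab would look something like this (device names illustrative):

 /dev/sda2  none  swap  sw,pri=1  0 0
 /dev/sdb2  none  swap  sw,pri=1  0 0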

[Comment from Comcy: When configuring RAID5 alongside a RAID0 swap (as in four identical drives, where /boot is RAID1 on sda1 and sdb1, swap is RAID0 on sdc1 and sdd1, and the remaining space is under RAID5), pulling either sdc or sdd will cause 'strangeness': for example, menus not working in X while the mouse continues to work, etc. Normal behavior resumes a few seconds after the drive is returned to service. As this defeats the benefit of having RAID5, RAID 1 for swap would be the logical choice.]

I'm not going to include all the screen captures; it's really pretty much the same sequence of steps.

2005.1 Quick Command Line Version

Here are the steps to create a RAID 0 set of swap partitions:

 Create: Segment,sda_freespace1,size=2GB
 Create: Segment,sdb_freespace1,size=2GB
 Create: Region,MDRaid0RegMgr={},sda2,sdb2
 Create: Volume, "md/md1", Name="swap"
 Mkfs: SWAPFS={}, /dev/evms/swap
 Save
 Quit

2005.1 LVM Containers

The previous example allowed you to have mirrored boot partitions, but didn't really add much extra functionality. This section will show how to use one of the most useful EVMS features: containers.

Basically, a container separates the physical disk from its logical usage. It allows you to grow (or shrink) a filesystem as needed by dividing the disk into 32 MB chunks; you can then map these chunks at will into particular volumes. Corry and Dobbelstein have a nice overview of this; go to slide 12. They call it Volume Groups; the EVMS interface calls it Containers.
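As a quick worked example: the 72 GB container created below holds roughly 72 GB / 32 MB ≈ 2300 chunks; the 16 GB root region consumes 512 of them, and the remaining chunks stay free to grow that region, or to carve out new ones, later.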

2005.1 Create a dos partition for the rest of the disk

First create the raw dos partitions:

 Actions » Create » Segment
 select DOS Segment Manager
 Next
 select sda_freespace1
 Next
 Create
 OK

Repeat that for sdb_freespace1

You should now see two available objects (you may need to hit tab a few times to get to the Available Objects screen):

 Actions   Settings                                                        Help
 1=Available Objects

 Name                     Size Modified Active R/O  Plug-in
 ──────────────────────────────────────────────────────────────────────────────
 sda3                  72.0 GB    X                 DosSegMgr
 sdb3                  72.0 GB    X                 DosSegMgr

And these disk segments; sda3 and sdb3 are the ones that should have just been created:

 Actions   Settings                                                        Help
 5=Disk Segments

 Name                     Size Modified Active R/O  Type      Offset    Plug-in
 ──────────────────────────────────────────────────────────────────────────────
 sda_mbr                 31 KB    X                 Meta Data 0         DosSegM
 sda1                 509.8 MB             X        Data      63        DosSegM
 sda2                   2.0 GB             X        Data      1044225   DosSegM
 sda3                  72.0 GB    X                 Data      5237190   DosSegM
 sda_freespace1         905 KB                      Free Spac 156248190 DosSegM
 sdb_mbr                 31 KB    X                 Meta Data 0         DosSegM
 sdb1                 509.8 MB             X        Data      63        DosSegM
 sdb2                   2.0 GB             X        Data      1044225   DosSegM
 sdb3                  72.0 GB    X                 Data      5237190   DosSegM
 sdb_freespace1         905 KB                      Free Spac 156248190 DosSegM

2005.1 Bad Block Relocation

I used to show Bad Block Relocation, but the 2.6.12-r6 kernel included on the 2005.1 disk doesn't work with BBR so I've left it out of this walkthrough.

2005.1 Mirror the two segments together

Create a RAID 1 region of the two segments you just created:

 Actions » Create » Region
 select MD Raid 1 Region Manager
 Next
 select both sda3 and sdb3
 Next
 Create
 OK

You should now see md/md2 on the Available Objects screen:

 Actions   Settings                                                        Help
 1=Available Objects

 Name                     Size Modified Active R/O  Plug-in
 ──────────────────────────────────────────────────────────────────────────────
 md/md2                72.0 GB    X                 MDRaid1RegMgr

2005.1 Create an LVM2 container

Now create the container:

 Actions » Create » Container
 select LVM2 Region Manager
 Next
 select md/md2
 Next
 name the container main
 Create
 OK

The container shows up on screen 3 (Storage Regions):

 Actions   Settings                                                        Help
 3=Storage Regions

 Name                     Size Modified Active R/O Corrupt  Plug-in
 ──────────────────────────────────────────────────────────────────────────────
 lvm2/main/Freespace   72.0 GB                              LVM2
 md/md1                 4.0 GB             X                MDRaid0RegMgr
 md/md0               509.8 MB             X                MDRaid1RegMgr
 md/md2                72.0 GB    X                         MDRaid1RegMgr

2005.1 Create Storage Regions

Once you have a container, you need to create the actual storage regions. We will be creating one region, for the root filesystem. You could obviously make more, or you could keep your free space available.

Unlike with non-LVM filesystems, you don't have to worry overmuch about how big to make each partition; you can always grow or shrink a filesystem as long as it is allocated from an LVM container.

I'm using a root size of 16 GB; feel free to adjust this value to suit your needs.

 Actions » Create » Region
 select LVM2 Region Manager
 Next
 select lvm2/main/Freespace
 Next
 name the region root
 make the size 16 GB
 Create
 OK

If things went well you should see this available object:

 Actions   Settings                                                        Help
 1=Available Objects

 Name                     Size Modified Active R/O  Plug-in
 ──────────────────────────────────────────────────────────────────────────────
 lvm2/main/root        16.0 GB    X                 LVM2

2005.1 EVMS Volumes

You now have to package this region up into an EVMS Volume:

 Actions » Create » EVMS Volume
 select lvm2/main/root
 volume name: root
 Create

This new volume should show up on the Logical Volumes Screen:

 Actions   Settings                                                        Help
 0=Logical Volumes

 Name                          Size Modified Active R/O  Plug-in   Mountpoint
 ──────────────────────────────────────────────────────────────────────────────
 /dev/evms/boot            509.7 MB    X                 Ext2/3
 /dev/evms/loop0            41.1 MB             X                  /mnt/livecd
 /dev/evms/root             16.0 GB    X
 /dev/evms/swap              4.0 GB    X

2005.1 Make Filesystems

Finally, create a filesystem on the root volume:

 Actions » File System » Make
 select ReiserFS System Interface Module
 Next
 select /dev/evms/root
 Next
 (optional:) change volume label to "root"
 Make Filesystem

Save and Quit out of evmsn

Or, more exactly, Quit and Save. evmsn asks if you want to save when quitting, so select Quit and then Save:

 Actions » Quit
 Save

You should see some kernel messages about "raid array is not clean -- starting background reconstruction". This is a good sign and means that things were set up correctly.

2005.1 Quick Command Line Version

Here are the steps to create a RAID 1 root of 16GB in an LVM2 container:

 Create: Segment,sda_freespace1,
 Create: Segment,sdb_freespace1,
 Create: Region,MDRaid1RegMgr={},sda3,sdb3
 Create: Container,LVM2={name="main"},md/md2
 Create: Region, LVM2={name="root", size=16gb},lvm2/main/Freespace
 Create: Volume, "lvm2/main/root", Name="root"
 Mkfs: ReiserFS={vollabel=root}, /dev/evms/root
 Save
 Quit

2005.1 Continue with the install

You can now continue with your install. EVMS has created your partitions for you, so you can pick up at the swapon command on the Preparing the Disks page.

 swapon /dev/evms/swap
 mount /dev/evms/root /mnt/gentoo
 mkdir /mnt/gentoo/boot
 mount -t ext3 /dev/evms/boot /mnt/gentoo/boot


2005.1 Setting up boot loader

Concerning booting with EVMS, there are two issues ahead. If you are using a current kernel from the 2.6 series (the default on 2005.1), you will run into the following trap:

The Linux kernel locks the whole disk from which the root file system is mounted. That implies that if you boot from, for instance, /dev/sda2, then EVMS cannot install its naming scheme, and only names like /dev/sda2 or /dev/vg00/root_fs are possible. It is not possible to use EVMS in this case.

It is necessary to use an initrd which issues the command evms_activate PRIOR to mounting ANY root file system; this makes it possible to use EVMS normally. The root partition would then be called /dev/evms/sda2.

The cheapest way to set this up under Gentoo is:

2) emerge evms and compile a recent kernel with genkernel. The -gtk USE flag prevents installation of the graphical configuration tool evmsgui and its dependencies (it can be installed later), and static means the EVMS tools are compiled statically for the initrd (NOTE: the "static" USE flag is no longer required for evms-2.5.5):

 USE="-gtk static" emerge evms
 emerge genkernel
 genkernel --menuconfig --evms2 all
 

This compiles a complete kernel plus the modules necessary for EVMS usage, AND prepares a standardized Gentoo initramfs which can issue the famous evms_activate.

3) edit your boot loader to have the following kernel parameters (this is for grub and gentoo-sources 2.6.12-gentoo-r10):

 title Gentoo 2005.1 booting with evms support
 kernel /kernel-gentoo-x86-2.6.12-r10 root=/dev/ram0 real_root=/dev/evms/sdb2 init=/linuxrc udev doevms2
 initrd /initramfs-gentoo-x86-2.6.12-r10
 

ATTENTION: Most important is the parameter "doevms2", which triggers evms_activate.

Remark from DR: I had to add ramdisk=8192 to the kernel line to make it work.

4) If you need to install grub to the MBR of each disk, run the following from outside the chroot using a livecd. Repeat for each drive on your system, and substitute the appropriate device node:

$ grub
> root (hd0,0)
> setup (hd0)

5) enjoy your running system ;-)

Take care that /etc/fstab uses the EVMS device naming scheme throughout, i.e. /dev/evms/md0, /dev/evms/lvm2/vg00/root, etc.

Please send questions about this procedure to cchrist at mcpsoftworks dot com.

Remark from Steven: I had trouble booting my system when an fsck ran on boot. When skipping the check on boot (set the last number for the mounts in /etc/fstab to 0), my system booted normally. I still have to find out what the exact consequences are, and how I should check my partitions from now on. (Some info I found: evms.sourceforge.net, look for the section 'System Startup'.)

NOTE for step 4: Supposing sda1/sdb1 are in RAID1, "volumed" as /dev/evms/boot, and you want grub installed on both MBRs, then before chrooting as described in the Gentoo Handbook you should:

$ mount -o bind /dev /mnt/gentoo/dev

Then proceed with this guide and when you get to step 4 do:

$ grub
> device (hd0) /dev/evms/.nodes/sda
> root (hd0,0)
> setup (hd0)
> device (hd0) /dev/evms/.nodes/sdb
> root (hd0,0)
> setup (hd0)
> quit

2005.1 Setting up fstab

Setting up fstab is pretty normal, but the device names need to be the EVMS device nodes:

 /dev/evms/boot
 /dev/evms/swap
 /dev/evms/root
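A minimal sketch for this walkthrough's layout (filesystem types as created above: ext3 for boot, ReiserFS for root; adjust to taste):

 /dev/evms/boot  /boot  ext3      noatime  1 2
 /dev/evms/swap  none   swap      sw       0 0
 /dev/evms/root  /      reiserfs  noatime  0 1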

2005.1 Building your kernel

First of all: when in doubt you could check the EVMS kernel build page, especially if you are running a 2.4 kernel.

Linux Kernel Configuration: Enable EVMS kernel features
Code maturity level options  --->
  Prompt for development and/or incomplete code/drivers
Device Drivers --->
  Block devices  --->
    Loopback device support
    RAM disk support
      Initial RAM disk (initrd) support (NEW)
  Multi-device support (RAID and LVM)  --->
    Multiple devices driver support (RAID and LVM)
      RAID support
        Linear (append) mode
        RAID-0 (striping) mode
        RAID-1 (mirroring) mode
        RAID-10 (mirrored striping) mode (EXPERIMENTAL)
        RAID-4/RAID-5 mode
        RAID-6 mode
        Multipath I/O support
        Faulty test module for MD
      Device mapper support
        Crypt target support
        Snapshot target (EXPERIMENTAL)
        Mirror target (EXPERIMENTAL)
        Zero target (EXPERIMENTAL)
        Multipath target (EXPERIMENTAL)
        Bad Block Relocation Device Target (EXPERIMENTAL)

You only need the loopback device if you are going to modify the EVMS initrd image.

You only need the RAID and device mapper capabilities that you are actually using on your system.

The EVMS initrd assumes that all RAID, device mapper, and disk drivers needed to boot are compiled into the kernel; the initrd does not load any modules. If this doesn't work for you there are directions on the EVMS web site for modifying the init-ramdisk image (you will need to scroll the page down...)

2005.1 Change checkroot

FIXME: since a recent update, the function start_volumes can't be called from /etc/init.d/checkroot:

/etc/init.d/checkroot: line 15: start_volumes: command not found

HINT: use start_addon evms instead.

HINT AGAIN: The new start_addon evms command (replacing the old start_volumes) seems to have been moved to the initrd init script. I'm searching for a way to edit the files in genkernel's templates. Hadrien KOHL

By default the Gentoo startup files start EVMS in the /etc/init.d/checkfs script. Since in our case the root filesystem is on EVMS, this is too late; EVMS needs to be started in the /etc/init.d/checkroot script.

Change the top of the start() function in /etc/init.d/checkroot to this:

start() {
       local retval=0

       # Start RAID/LVM/EVMS/DM volumes for /usr, /var, etc.
       # NOTE: this should be done *before* mounting anything
       mount / -n -o remount,rw &>/dev/null
       [[ -z ${CDBOOT} ]] && start_volumes

       if [[ ! -f /fastboot && -z ${CDBOOT} ]] && ! is_net_fs / ; then

The line you really need to add is the one that calls start_volumes; this will eventually call evms_activate.

You will also need to comment out the same line in /etc/init.d/checkfs.

2005.1 Fixing /etc/conf.d/rc by changing RC_VOLUME_ORDER

By default, Gentoo will start dm devices before starting evms, which can cause errors such as "cannot mount /dev/evms/root" during the boot-time fsck. The error occurs because /dev/evms is empty. To fix this, tell Gentoo to try evms before anything else by setting the following in the rc file:

RC_VOLUME_ORDER="evms raid lvm dm"

Things that should be talked about

There are some topics that this page doesn't cover and should be talked about:


Concerns or Compliments? Please use the Discussion section.

