
HOWTO Gentoo Install on Software RAID mirror and LVM2 on top of RAID





Intro

This HOWTO describes the small additions to the standard Gentoo x86 Handbook installation needed to set up a server with a software RAID mirror (RAID1) and LVM2 on top of the RAID1 array for easy partition management. It assumes two IDE hard disks participating in the RAID1 set, located at /dev/hda and /dev/hdg; your disk layout may of course differ.

The example server used for this HOWTO was a 1U machine with an Intel S845WD-E motherboard, which has an integrated Promise FastTrak RAID controller. Since the 2.6 kernel tree no longer carries the ataraid/pdcraid drivers, it is better to use the standard software RAID (md) support with a 2.6 kernel. The server also had a CD-ROM drive that I wanted available as a boot device, and since it is not possible to boot from hard disks behind the Promise IDE controller, I chose the following layout: HDD1 /dev/hda, CDROM /dev/hdb, HDD2 /dev/hdg (secondary Promise IDE interface).

Note: You must tune the read-ahead settings for the LVM volumes (man blockdev); with that done, performance will not drop at all, or only by about 1%.
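For example, a minimal sketch of checking and raising the read-ahead on one of the LVM volumes created later in this HowTo (the value 4096 sectors, i.e. 2MB, is just an illustration; tune it for your workload):

blockdev --getra /dev/vg/usr        # show the current read-ahead, in 512-byte sectors
blockdev --setra 4096 /dev/vg/usr   # raise it to 4096 sectors (2MB)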

This page should be updated, as it does not reflect the preferred means of achieving the desired result. LVM can natively mirror drives and partitions. By placing LVM on top of a RAID mirror set, the only achieved result is additional overhead and a negative performance impact for zero net gain. Simply use LVM to create a mirror set and be done with it. The end result is better performance (less CPU overhead) and superior volume management.

I pulled the first link I found from Google to further document LVM mirror sets: http://www.redhat.com/docs/manuals/csgfs/browse/4.6/Cluster_Logical_Volume_Manager/mirrorrecover.html
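As a rough sketch of that alternative, assuming /dev/hda4 and /dev/hdg4 are plain LVM partitions (type 8e) rather than RAID members, and using the in-memory mirror log so no third log device is needed:

pvcreate /dev/hda4 /dev/hdg4
vgcreate vg /dev/hda4 /dev/hdg4
# one copy of the LV is kept on each physical volume
lvcreate -L10G -m1 --mirrorlog core -n usr vg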

Initial setup

modprobe raid1

There are two ways of partitioning the disks: with fdisk (plus sfdisk to clone the table to the second disk) or with cfdisk.

fdisk

First we'll use the fdisk command to create the partitions.

fdisk /dev/hda
fdisk /dev/hdg


I chose the following partition setup:

primary part1 -> boot 100MB
primary part2 -> swap 512MB
primary part3 -> root 1GB
primary part4 -> LVM2 for /usr /var /opt /tmp /home XGB

Alternate config:

primary part1 -> boot 100MB
primary part2 -> LVM2 for swap, / and whatever other filesystems (such as /var and /home) you wish

To set the partition type to Linux raid autodetect, press t in fdisk and then enter fd as the partition hex code.
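The fdisk dialogue for that step looks roughly like this:

Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)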

Code: fdisk
# so the partition schema looks like that:
Device Boot         Start         End      Blocks   Id  System
/dev/hda1               1          13      104391   fd  Linux raid autodetect
/dev/hda2              14          76      506047+  fd  Linux raid autodetect
/dev/hda3              77         259     1469947+  fd  Linux raid autodetect
/dev/hda4             260       19457   154207935   fd  Linux raid autodetect
Device Boot         Start         End      Blocks   Id  System
/dev/hdg1               1          13      104391   fd  Linux raid autodetect
/dev/hdg2              14          76      506047+  fd  Linux raid autodetect
/dev/hdg3              77         259     1469947+  fd  Linux raid autodetect
/dev/hdg4             260       19457   154207935   fd  Linux raid autodetect
Code: sfdisk
sfdisk -d /dev/hda | sfdisk /dev/hdg

Having to align partition boundaries with cylinders was a DOS legacy issue and is not something that causes problems for Linux. You can override the complaint by giving the -L (--Linux) option, which stands for 'do not complain about things irrelevant for Linux':

Code: sfdisk
sfdisk -d /dev/hda | sfdisk -L /dev/hdg

cfdisk

Using cfdisk is an easier, menu-driven way to partition the physical drives.

cfdisk /dev/hda
cfdisk /dev/hdg


Among the many options available, select "Type" from the menu (or press t) and set the partition type to FD (Linux raid autodetect). Don't forget to mark your boot partition bootable via the "Bootable" menu option. Both drives must end up with identical partition tables; a quick way to verify this is shown below.
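To confirm the two drives really ended up with identical tables, a quick check (bash syntax; the sed just masks the device names so only real differences show up):

diff <(sfdisk -d /dev/hda | sed 's/hda/hdX/') <(sfdisk -d /dev/hdg | sed 's/hdg/hdX/')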


The boot loader must be able to read the kernel without RAID drivers, so /boot has to live on something readable as a plain partition. A RAID1 array qualifies, because RAID1 keeps an exact copy of the data on each disk: with RAID1, Disk1 and Disk2 are identical, so each member can be mounted separately like a normal partition. The reason Linux cannot be booted from a software RAID0 array is simple: the drivers needed to read the array live in the kernel, so the boot loader cannot reach the kernel in the first place. It's the chicken or the egg problem. A RAID0 array, or any RAID array that utilizes striping, cannot be mounted without the RAID software, since all data spans multiple drives; without the software it's unreadable. This is not an issue with RAID1 (mirrored) arrays, where each disk contains all the information for every partition and can be addressed individually just like any single partitioned disk. Because of this feature, and to make sure we can get back up and running quickly in the event of a failure, we will create a RAID1 array for /boot, giving us a backup copy of our kernels and other important files.
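As an illustration of this property, once the arrays exist you can mount a single RAID1 member on its own; only ever do this read-only, and only while the array is stopped, or the two copies will diverge:

mkdir -p /mnt/test
mount -o ro /dev/hda1 /mnt/test   # one half of the /boot mirror, readable by itself
umount /mnt/test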

Note: If you only have a simple RAID controller, like an onboard VIA or a simple PCI RAID controller, don't create the RAID set on the controller. If you do, you may receive an error like "Device or Resource busy" when you run the mdadm --create commands.
mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/hda1 /dev/hdg1
mdadm --create --verbose /dev/md2 --level=1 --raid-devices=2 /dev/hda2 /dev/hdg2
mdadm --create --verbose /dev/md3 --level=1 --raid-devices=2 /dev/hda3 /dev/hdg3
mdadm --create --verbose /dev/md4 --level=1 --raid-devices=2 /dev/hda4 /dev/hdg4
Note: The above commands create a RAID1 array for the swap space (/dev/md2). This is recommended for maximum reliability. If you prefer performance, there is no reason to RAID the swap space at all (RAID0 or otherwise): the kernel will stripe the swap itself if you specify multiple swap partitions with equal priority. See the TLDP page on swapping on RAID for explanations of both options.
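If you take the performance route instead, a sketch of the fstab entries for kernel-striped swap, assuming hda2 and hdg2 were left as plain swap partitions (type 82) rather than RAID members:

/dev/hda2   none   swap   sw,pri=1   0 0
/dev/hdg2   none   swap   sw,pri=1   0 0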
Note: If "mdadm --create" do not create /dev/md* then you need to create the nodes manually before starting raid
mknod /dev/md1 b 9 1
mknod /dev/md2 b 9 2
mknod /dev/md3 b 9 3
mknod /dev/md4 b 9 4

If you intend to create your software RAID mirror using only one drive for now, the command below will be helpful. The missing keyword allows you to create an array with only one member; the second drive can be attached later, as shown after the command. If you instead create the array with --raid-devices=1, you will only be able to add another drive as a spare, which is of course useless in a mirror.

mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/hda1 missing
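When the second drive becomes available later, attach it to the degraded mirror and the resync starts automatically:

mdadm /dev/md1 --add /dev/hdg1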
# execute "watch -n1 'cat /proc/mdstat'" to see its status
livecd root # watch -n1 'cat /proc/mdstat'
Personalities : [raid1]
md4 : active raid1 hdg4[1] hda4[0]
     154207808 blocks [2/2] [UU]
     [>....................]  resync =  4.0% (6240576/154207808) finish=45.5min speed=54148K/sec
md3 : active raid1 hdg3[1] hda3[0]
     1469824 blocks [2/2] [UU]
md2 : active raid1 hdg2[1] hda2[0]
     505920 blocks [2/2] [UU]
md1 : active raid1 hdg1[1] hda1[0]
     104320 blocks [2/2] [UU]
unused devices: <none>
Note: If you get poor resync performance, adjust the values in /proc/sys/dev/raid/speed_limit_max and/or /proc/sys/dev/raid/speed_limit_min.
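For example (values are in KB/s; these particular numbers are only illustrations):

cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
echo 50000 > /proc/sys/dev/raid/speed_limit_min   # raise the guaranteed minimum resync speed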
File: /etc/mdadm.conf
# paste this inside
DEVICE          /dev/hda*
DEVICE          /dev/hdg*
ARRAY           /dev/md1 devices=/dev/hda1,/dev/hdg1
ARRAY           /dev/md2 devices=/dev/hda2,/dev/hdg2
ARRAY           /dev/md3 devices=/dev/hda3,/dev/hdg3
ARRAY           /dev/md4 devices=/dev/hda4,/dev/hdg4
MAILADDR        root@localhost

OR

mdadm --detail --scan > /etc/mdadm.conf

If you specify a MAILADDR, the Gentoo init scripts will start mdadm in monitor mode, and an email will be sent whenever a RAID failure is detected.
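To check that alert mail actually goes out (this assumes a working local MTA), mdadm can send a test alert for each array and exit:

mdadm --monitor --scan --oneshot --test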

Filesystems setup

mke2fs -j /dev/md1
mkswap /dev/md2
swapon /dev/md2
mkreiserfs /dev/md3
modprobe dm-mod

Also, we don't want LVM to scan media like CD drives and such, so restrict it to the RAID device:

mkdir -p /etc/lvm
echo 'devices { filter = [ "a|/dev/md4|", "r|.*|" ] }' > /etc/lvm/lvm.conf
vgscan
# it doesn't find anything...that's ok
pvcreate /dev/md4
pvdisplay

If the PV Name listed is not /dev/md4 but instead one of /dev/hda4 or /dev/hdg4, you probably mistyped the devices filter above.
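When the filter is right, the listing should look roughly like this (sizes here are made up):

 --- NEW Physical volume ---
 PV Name               /dev/md4
 VG Name
 PV Size               147.06 GB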

NOTE: This didn't work for me; "pvdisplay /dev/md4" did. --DR 06-09-21

vgcreate vg /dev/md4
vgdisplay vg

Again, if the device filter line above is not correct, you'll get "Found duplicate PV" errors and the VG will not be created.

# Ignore errors like "/etc/lvm/backup: fsync failed: Invalid argument" from now on
# You can make a volume bigger initially or extend it later (see the lvextend sketch below) - as you wish
lvcreate -L10G -nusr  vg
lvcreate -L5G  -nhome vg
lvcreate -L5G  -nopt  vg
lvcreate -L10G -nvar  vg
lvcreate -L2G  -ntmp  vg
# create the filesystems, one by one:
mkreiserfs /dev/vg/usr
mkreiserfs /dev/vg/home
mkreiserfs /dev/vg/opt
mkreiserfs /dev/vg/var
mkreiserfs /dev/vg/tmp
# or, equivalently, as a one-liner:
for i in usr home opt var tmp; do mkreiserfs /dev/vg/$i; done
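To extend a volume later, as promised above, a sketch (reiserfs can be grown while mounted):

lvextend -L+5G /dev/vg/usr    # grow the logical volume by 5GB
resize_reiserfs /dev/vg/usr   # grow the filesystem to fill the volume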

Mounting

Creating needed mount points and mounting the filesystems:

mount /dev/md3 /mnt/gentoo/
cd /mnt/gentoo
mkdir boot usr home opt var tmp
chmod 1777 /mnt/gentoo/tmp
#if you have /var/tmp on a separate partition, run the above command on that, too
mount /dev/md1 boot
for i in `ls /dev/vg`; do mount /dev/vg/$i $i; done;

Stage 3 setup

cd /mnt/gentoo
tar xvjpf stage?-*.tar.bz2

Portage snapshot

tar xjf portage*

Chrooting

cp -L /etc/resolv.conf /mnt/gentoo/etc/resolv.conf
cp -L /etc/mdadm.conf /mnt/gentoo/etc/mdadm.conf
mount -t proc none /mnt/gentoo/proc
mount -o bind /dev /mnt/gentoo/dev
chroot /mnt/gentoo /bin/bash
env-update
source /etc/profile
export PS1="(chroot) $PS1"

System configuration

Follow the Gentoo Handbook instructions and try to remember the following:

emerge -va sys-fs/lvm2 sys-fs/mdadm sys-fs/reiserfsprogs

(Edit by DoubleHP, 11th Feb 2007: sys-fs/raidtools is now deprecated; see bug #165917)

Here are some system configuration files:

File: /etc/fstab
/dev/md1                /boot           ext3            noauto,noatime          1 2
/dev/md3                /               reiserfs        noatime                 0 0
/dev/md2                none            swap            sw                      0 0
/dev/vg/usr             /usr            reiserfs        noatime                 0 0
/dev/vg/var             /var            reiserfs        noatime                 0 0
/dev/vg/opt             /opt            reiserfs        noatime                 0 0
/dev/vg/tmp             /tmp            reiserfs        noatime                 0 0
/dev/vg/home            /home           reiserfs        noatime                 0 0
/dev/cdroms/cdrom0      /mnt/cdrom      iso9660         noauto,ro               0 0
none                    /proc           proc            defaults                0 0
none                    /dev/shm        tmpfs           defaults                0 0

As mentioned above, there is no need to use RAID on the swap partitions. Create both partitions as plain swap and put them in your fstab like this: "/dev/hda2 none swap sw,pri=0". Copy that line and use /dev/hdg2 for the other swap device. --DA

However, the loss of a swap partition will then cost you the system... performance vs. stability; your choice of course. --AK

NB! Fix /etc/lvm/lvm.conf for a fast bootup (otherwise LVM will mess up device scanning on boot):

File: /etc/lvm/lvm.conf
#fill it with following
 devices {
    scan = [ "/dev/" ]
    filter = [ "a|/dev/md/|", "r|/dev/.*/|" ]
 }

Bootloader installation and configuration

emerge -va sys-boot/grub
grub --no-floppy
#setup MBR on /dev/hda
root (hd0,0)
setup (hd0)
#setup MBR on /dev/hdg
device (hd0) /dev/hdg
root (hd0,0)
setup (hd0)

NB! If one of the hard disks fails, your motherboard BIOS has to be set up to boot from the other hard disk as well.

File: /boot/grub/grub.conf
 default 0
 timeout 8
 #Nice new 2.6 kernel
 title=Gentoo 2.6.16
 root (hd0,0)
 kernel /kernel-2.6.16-gentoo-r7 root=/dev/md3 md=3,/dev/hda3,/dev/hdg3


NB! If you use genkernel with an initrd or initramfs, the kernel will not set up the RAID devices automatically; all RAID and LVM setup duty falls to the initrd/initramfs. In that case you need to add some extra kernel options (dolvm2 and lvmraid=/dev/xxx,...) and make sure you compile the kernel with genkernel's --lvm2 option. If you also use a LABEL instead of a device name, then you must add the --disklabel option to genkernel as well.

NB2! If you are using genkernel with a SCSI controller and SCSI boot drives, you may need to add 'doscsi' to the kernel append parameters.

 emerge -va gentoo-sources
 emerge -va sys-kernel/genkernel
 genkernel --install --disklabel --lvm2 --dmraid all
File: /boot/grub/grub.conf
 default 0
 timeout 8
 title Gentoo Linux 2006.0 [Default] genkernel-x86_64-2.6.16-gentoo-r7
 root (hd0,0)
 kernel /kernel-genkernel-x86_64-2.6.16-gentoo-r7 root=/dev/rd/0 init=/linuxrc real_root=LABEL=gt_root dolvm2 lvmraid=/dev/md1,/dev/md2,/dev/md3,/dev/md4
 initrd /initramfs-genkernel-x86_64-2.6.16-gentoo-r7

Edit by DoubleHP, 11th Feb 2007: when using RAID 0, 1, or 0+1 without LVM, you should not use root=LABEL=, since those RAID configurations let the kernel read the indicated LABEL directly off a member partition, and it is then likely to mount that partition instead of the RAID array. In some cases you may also try omitting 'dolvm2' and lvmraid= and relying on the kernel's autoprobe function, which makes your configuration more tolerant of hardware changes (disk swapping, for example).

These are the kernel options I use for my md and lvm setup. --Bunkacid 07:50, 23 September 2008 (UTC)

part=/dev/sda2 lvmraid=/dev/md2 domdadm dolvm dodmraid root=/dev/ram0 init=/linuxrc real_root=/dev/mapper/system-slash udev video=vesafb:1152x768-32@75,gtf

Note: With the latest genkernel you should pass the domdadm option to the kernel. If busybox can't find /etc/mdadm.conf, it will try to autodetect the arrays.

Rebooting the system after install

umount /mnt/gentoo/var
umount /mnt/gentoo/usr
umount /mnt/gentoo/home
umount /mnt/gentoo/tmp
umount /mnt/gentoo/opt
umount /mnt/gentoo/proc
umount /mnt/gentoo/boot
umount /mnt/gentoo
vgchange -an
reboot

Short Version

umount /mnt/gentoo/*

Ignore the errors

umount /mnt/gentoo
vgchange -an
reboot

--mainframe 08:29, 26 Nov 2004 (GMT)

Troubleshooting

See Recovering RAID and LVM
