
Installing on LVM2 and RAID5


This article is part of the HOWTO series.


Intro

I just thought I'd share my success with you all in the hope it can help at least one person. "mainframe" has already written very good documentation on LVM2 and RAID1.

Follow the Gentoo Handbook for AMD64. I will be doing a stage1 installation.

My aim was to have the root (/) and boot (/boot) filesystems on RAID1, and the swap and LVM2 volumes on RAID5. Swap on RAID5 might not be great for performance, but it does save your box if a disk goes down... Thoughts?

Comment: If you want a system that will not crash if one of your disks dies, put swap on RAID5 (as above). If you are more concerned with performance, do not use mdadm or LVM for swap. Instead, let the kernel manage it and set the priority of each of your swap partitions to the same value; this makes swap faster, but you may get a kernel panic if a drive dies. --Daniel Santos 05:57, 30 December 2007 (UTC)

If you are thinking of putting RAID5 on your system, you should know that it requires a minimum of three disks. Know also that the /boot filesystem can only be mirrored (RAID1), since the boot loader needs to be able to read each member as a plain partition.

I'll try to use the Gentoo Handbook titles so you know where the changes take place.

There are many ways to do the same thing; this is just one of them. Feel free to correct me if I'm wrong, or improve this document if you feel up to it.

--ecosta 11:45, 9 Aug 2005 (GMT)

System components

If you need details on my hardware and BIOS setup, go to Asus A8N-SLI.

Booting

Code: Load RAID and LVM2 modules
 
# modprobe raid1
# modprobe raid5
# modprobe dm-mod
  

You will need partitions of identical size for any filesystem you want to mirror; identical disks are a plus. All partitions that will become part of an array have to be set to the "Linux raid autodetect" partition type (fd), as shown in the example below. I used a lot of "small" partitions for easier management of LVM later on.
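If a partition is not already type fd, you can change it from within fdisk. A quick illustration, assuming /dev/sda (the exact prompts vary slightly between fdisk versions):

Code: Set the partition type to fd
 
# fdisk /dev/sda
Command (m for help): t                    <- change a partition's type
Partition number (1-13): 1
Hex code (type L to list codes): fd        <- Linux raid autodetect
Command (m for help): w                    <- write the table and exit
 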

File: Partition setup
 
sd[abcd]1     /boot
sd[abcd]2     /
sd[abcd]3     swap
sd[abcd]5-13  LVM2 (/usr,/home,/var,/tmp,/opt)
  
Code: Disk partition
 
# fdisk -l /dev/sdd

Disk /dev/sdd: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1          13      104391   fd  Linux raid autodetect
/dev/sdd2              14          75      498015   fd  Linux raid autodetect
/dev/sdd3              76         137      498015   fd  Linux raid autodetect
/dev/sdd4             138       30401   243095580    5  Extended
/dev/sdd5             138        3785    29302528+  fd  Linux raid autodetect
/dev/sdd6            3786        7433    29302528+  fd  Linux raid autodetect
/dev/sdd7            7434       11081    29302528+  fd  Linux raid autodetect
/dev/sdd8           11082       14729    29302528+  fd  Linux raid autodetect
/dev/sdd9           14730       18377    29302528+  fd  Linux raid autodetect
/dev/sdd10          18378       22025    29302528+  fd  Linux raid autodetect
/dev/sdd11          22026       25673    29302528+  fd  Linux raid autodetect
/dev/sdd12          25674       29321    29302528+  fd  Linux raid autodetect
/dev/sdd13          29322       30401     8675068+  fd  Linux raid autodetect
  

Creating the RAID

Make the required devices for the RAIDs

Newer versions of mdadm do this for you, so if you are using a recent (2006 or later) version of mdadm you can skip this step.

Code: Make RAID devices
 
# cd /dev
# mkdir /dev/md
# for i in `seq 0 11`; do mknod /dev/md/$i b 9 $i; ln -s md/$i md$i; done
  

Build the RAIDs

Remember, I set up the boot and root file systems on RAID1 and the rest on RAID5. These are the commands I used to do so. If you have four disks you could set up a spare disk rather than use it in the RAID5; that's up to you, I just need the disk space.

I also set up the RAID1 over four disks rather than two. I figured I might as well not let them go to waste. Any thoughts on that?

I also added a write-intent bitmap. The bitmap keeps track of which blocks may be out of sync, which can dramatically speed up rebuild times. An external bitmap file is only supported on ext2/ext3 filesystems, so I've chosen --bitmap=internal, which stores the bitmap inside the array itself and replicates it to all member disks.

Code: Build all RAID devices
 
# mdadm --create --verbose /dev/md0 --level=1 --bitmap=internal --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
# mdadm --create --verbose /dev/md1 --level=1 --bitmap=internal --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
# mdadm --create --verbose /dev/md2 --level=5 --bitmap=internal --raid-devices=4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
# mdadm --create --verbose /dev/md3 --level=5 --bitmap=internal --raid-devices=4 /dev/sda5 /dev/sdb5 /dev/sdc5 /dev/sdd5
...
# mdadm --create --verbose /dev/md11 --level=5 --bitmap=internal --raid-devices=4 /dev/sda13 /dev/sdb13 /dev/sdc13 /dev/sdd13
  

You may want to have just two disks in the RAID1 array and use the others as spares. This reduces the amount of useless work those disks have to do, and the chances of both active disks failing in the time it takes a spare to sync are infinitesimal:

Code: RAID1 with Spares
 
# mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1 --spare-devices=2 /dev/sdc1 /dev/sdd1
# mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2 --spare-devices=2 /dev/sdc2 /dev/sdd2
  

Note: If you are attempting to create an array with no spares and mdadm insists on giving you a spare and telling you that one of your disks is missing, add "--spare-devices=0 -f" to force it to use zero spares.
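Note: this guide assumes the old 0.90 superblock format, which GRUB legacy and the kernel's RAID autodetection can read. Newer mdadm releases default to 1.x metadata; if yours does, you may need to force the old format on at least the /boot and root arrays, for example:

Code: Force 0.90 metadata (newer mdadm only)
 
# mdadm --create --verbose /dev/md0 --level=1 --metadata=0.90 --bitmap=internal --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
 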

Generate config file

Now we need to set up "/etc/mdadm.conf". Run the following command to populate the file; you might want to check the output before appending it.
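Depending on your mdadm version you may also want a DEVICE line near the top of the file, telling mdadm which devices to scan. Many setups rely on the default of scanning all partitions, so treat this as optional:

File: /etc/mdadm.conf (optional DEVICE line)
 
DEVICE /dev/sd[abcd]*
 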

Code: Edit /etc/mdadm.conf
 
# mdadm --detail --scan >> /etc/mdadm.conf

# tail -n 25 /etc/mdadm.conf

ARRAY /dev/md1 level=raid1 num-devices=4 UUID=06a93599:485b2b29:25d86b0d:2a17d4f7
   devices=/dev/sda2,/dev/sdb2,/dev/sdc2,/dev/sdd2
ARRAY /dev/md2 level=raid5 num-devices=4 UUID=e612fc65:1f373274:9c00c8e4:f5ffc107
   devices=/dev/sda3,/dev/sdb3,/dev/sdc3,/dev/sdd3
ARRAY /dev/md3 level=raid5 num-devices=4 UUID=6ba39969:9340023c:00a1673a:64c38673
   devices=/dev/sda5,/dev/sdb5,/dev/sdc5,/dev/sdd5
ARRAY /dev/md4 level=raid5 num-devices=4 UUID=a18bb931:531e3c2a:13caef26:c473b587
   devices=/dev/sda6,/dev/sdb6,/dev/sdc6,/dev/sdd6
ARRAY /dev/md5 level=raid5 num-devices=4 UUID=f29d9bad:f7adb0bb:a6d5c9f4:f8fab77c
   devices=/dev/sda7,/dev/sdb7,/dev/sdc7,/dev/sdd7
ARRAY /dev/md6 level=raid5 num-devices=4 UUID=7129a9d8:cca1aa24:3c06dd57:fbfd0209
   devices=/dev/sda8,/dev/sdb8,/dev/sdc8,/dev/sdd8
ARRAY /dev/md7 level=raid5 num-devices=4 UUID=54dbec7e:d334c4c2:8ddf2268:9667d2ea
   devices=/dev/sda9,/dev/sdb9,/dev/sdc9,/dev/sdd9
ARRAY /dev/md8 level=raid5 num-devices=4 UUID=1ee9675b:4d23617f:70e0cd48:e2e2c0b1
   devices=/dev/sda10,/dev/sdb10,/dev/sdc10,/dev/sdd10
ARRAY /dev/md9 level=raid5 num-devices=4 UUID=6ef56e4c:6989326b:941666a2:f05da515
   devices=/dev/sda11,/dev/sdb11,/dev/sdc11,/dev/sdd11
ARRAY /dev/md10 level=raid5 num-devices=4 UUID=569519e6:b8fbb794:9ff9bcd4:f3b7ce27
   devices=/dev/sda12,/dev/sdb12,/dev/sdc12,/dev/sdd12
ARRAY /dev/md11 level=raid5 num-devices=4 UUID=9b4fa852:5ff122a1:d9038cb7:5e9fd01d
   devices=/dev/sda13,/dev/sdb13,/dev/sdc13,/dev/sdd13
ARRAY /dev/md0 level=raid1 num-devices=4 UUID=a6305e82:39cfa368:63a0848d:a9be4c38
   devices=/dev/sda1,/dev/sdb1,/dev/sdc1,/dev/sdd1
  

Now you can check the status of your RAIDs with a couple of handy commands. Look for the "[UUUU]" indicator: each "U" means a disk is up, while a "_" (e.g. "[UUU_]") means a disk is down. Wait until all RAIDs are built before continuing.

Code: Check the RAIDs status
 
# cat /proc/mdstat         # This command will show you the health of your entire RAID
# mdadm --detail /dev/md0  # This command will give you detailed info on a specific RAID
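# watch -n 5 cat /proc/mdstat   # (optional) refresh the status every 5 seconds to follow the initial sync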
  


Remember that from now on, all references to the disks will be of the form "md" and not "sd[abcd]" anymore!

Code: Activate swap
 
# mkswap /dev/md2
# swapon /dev/md2
  
Code: Activate boot and root Filesystems
 
# mke2fs /dev/md0     # ext2 is recommended for the boot file system
# mke2fs -j /dev/md1  # ext3 will do fine here
  

Creating LVM2

This guide is not here to explain LVM, and I recommend you read up on it first if you don't know anything about it.


Edit the LVM2 config file

Let's get started with LVM2. First of all, edit "/etc/lvm/lvm.conf" so that LVM only scans the RAID devices. Remove any line that starts with the word "filter" and add this one instead.


File: Edit /etc/lvm/lvm.conf
 
filter = [ "a|/dev/md/*|", "r/.*/" ]
  

Initialise LVM2

Code: Activate LVM2
 
# vgscan         # Scan for existing volume groups (will result in nothing found)
# vgchange -a y  # Activate Volume Groups (VG)
  

Setup LVM2

We now add a Physical Volume (PV) to LVM and assign it to the Volume Group (VG) "vg00". Then we create a Logical Volume (LV) for each filesystem we want to use.

Code: Creating LVM2 LVs
 
# pvcreate /dev/md3
# vgcreate vg00 /dev/md3
# lvcreate -L 6G -n lv_usr vg00
# lvcreate -L 10G -n lv_home vg00
# lvcreate -L 4G -n lv_var vg00
# lvcreate -L 500M -n lv_tmp vg00
# lvcreate -L 2G -n lv_opt vg00
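# vgdisplay -v vg00              # (optional) verify the VG, its PV and the new LVs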
  

Creating Filesystems

I want all filesystems as "ext3" so I will simply type

Code: Create Filesystems
 
# for i in `find /dev/vg00 -type l`; do mke2fs -j $i; done
  

Mounting

Make sure you mount all file systems to the right place!

Code: Mount Filesystems
 
# mount -t ext3 /dev/md1 /mnt/gentoo/

# mkdir /mnt/gentoo/{boot,usr,home,var,tmp,opt}

# mount -t ext2 /dev/md0 /mnt/gentoo/boot
# mount -t ext3 /dev/vg00/lv_usr /mnt/gentoo/usr
# mount -t ext3 /dev/vg00/lv_home /mnt/gentoo/home
# mount -t ext3 /dev/vg00/lv_var /mnt/gentoo/var
# mount -t ext3 /dev/vg00/lv_tmp /mnt/gentoo/tmp
# mount -t ext3 /dev/vg00/lv_opt /mnt/gentoo/opt

# chmod 1777 /mnt/gentoo/tmp
  

make.conf

Just in case this can be of help, this is the content of my make.conf for an AMD64 3000+.

File: Edit /mnt/gentoo/etc/make.conf
 
CFLAGS="-march=k8 -O2 -pipe"
CHOST="x86_64-pc-linux-gnu"
CXXFLAGS="${CFLAGS}"
USE="amd64 multilib -gnome -ipv6 kde qt X"

GENTOO_MIRRORS="http://ftp.belnet.be/mirror/rsync.gentoo.org/gentoo/ ftp://ftp.easynet.nl/mirror/gentoo/"
SYNC="rsync://rsync.be.gentoo.org/gentoo-portage"
  

Change the mirrors to a country near you ;)

Before you chroot

Copy the config files over to the soon-to-be Linux system.

Code: Copy RAID and LVM2 config files
 
# cp -L /etc/mdadm.conf /mnt/gentoo/etc
# cp -Lr /etc/lvm /mnt/gentoo/etc
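# mount -t proc none /mnt/gentoo/proc   # also mount /proc for the chroot, as in the Handbook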
  

After you chroot

Make the RAID devices again, or you might have a nasty surprise when you reboot.

Code: Create Device files
 
# cd /dev
# mkdir /dev/md
# for i in `seq 0 11`; do mknod /dev/md/$i b 9 $i; ln -s md/$i md$i; done
  

distcc

Now might be a good time to set up distcc if you have another AMD64 box to help compile it all. If you do, read this.
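Before moving on to the boot loader, point /etc/fstab at the md and LVM devices rather than the raw disks. Here is a sketch matching the layout above (ext2 on /boot, ext3 everywhere else; adjust mount options to taste):

File: Edit /etc/fstab
 
/dev/md0            /boot   ext2    noauto,noatime  1 2
/dev/md1            /       ext3    noatime         0 1
/dev/md2            none    swap    sw              0 0
/dev/vg00/lv_usr    /usr    ext3    noatime         0 2
/dev/vg00/lv_home   /home   ext3    noatime         0 2
/dev/vg00/lv_var    /var    ext3    noatime         0 2
/dev/vg00/lv_tmp    /tmp    ext3    noatime         0 2
/dev/vg00/lv_opt    /opt    ext3    noatime         0 2
none                /proc   proc    defaults        0 0
 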

Grub

This is the point where we write to the MBR and write the boot config file.

device.map

It is very important that you do this, especially if you have an IDE CD-ROM. If you don't, GRUB might scan the IDE device first and lock up on the CD-ROM.

File: Edit /boot/grub/device.map
 
(hd0) /dev/sda
(hd1) /dev/sdb
(hd2) /dev/sdc
(hd3) /dev/sdd
  

grub.conf

Edit this file and remember to use your RAID device, not the raw disk.

File: Edit /boot/grub/grub.conf
 
timeout 5
default 0
fallback 1
splashimage=(hd0,0)/grub/splash.xpm.gz


title  Gentoo Linux 2.6.12-r6 (29/07/2005) - 1st boot (mirror sda2)
root (hd0,0)
kernel /boot/kernel-2.6.12-gentoo-r6-050729 root=/dev/md1
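
# fallback entry (illustrative; adjust the kernel file name to yours) - same kernel booted from the second disk
title  Gentoo Linux 2.6.12-r6 (29/07/2005) - 2nd boot (mirror sdb2)
root (hd1,0)
kernel /boot/kernel-2.6.12-gentoo-r6-050729 root=/dev/md1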
  

Grub on the MBR

I'm making all four disks bootable since I have them all mirrored. Stop when you are happy.

Code: Grub on the MBR
 
# grub --no-floppy --device-map=/boot/grub/device.map

grub> device (hd0) /dev/sda
grub> root (hd0,0)
grub> setup (hd0)

grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)

grub> device (hd0) /dev/sdc
grub> root (hd0,0)
grub> setup (hd0)

grub> device (hd0) /dev/sdd
grub> root (hd0,0)
grub> setup (hd0)

grub> quit
  

Reboot

That's about it. Exit the chroot, unmount the file systems and reboot!

Code: Prepare for reboot
 
# umount /mnt/gentoo/{usr,home,var,tmp,opt,boot,proc}
# umount /mnt/gentoo
# vgchange -an
# reboot
  