
Resize LVM2 on RAID5


This article is part of the HOWTO series.


Introduction

This HOWTO describes how to expand a RAID5 array as well as the LVM2 volumes on top of the RAID.

The document is a translation of the German version at http://de.www.gentoo-wiki.info/LVM2_und_RAID5_erweitern

LVM2 advantages

I use LVM on file servers as well as on client workstations. It combines easy filesystem expansion with easy maintenance, at the cost of a small performance penalty that I am willing to accept.

The advantages show when you want to expand a partition on short notice, or when you want to add a missing partition without hunting for free space in the partition table.

Sharing a disk between Linux and Windows also gets easier with LVM, because you can shift space from the Linux side to Windows without moving the Linux partitions.



IMPORTANT

Data backup

Did you back up your data? Please do so before continuing. A RAID array is no substitute for a backup: an accidental rm -rf /somepath/<SPACE> * or running mke2fs instead of e2fsck happens all too easily. Have a look at http://www.tldp.org/HOWTO/Software-RAID-HOWTO-10.html#ss10.2

The author of this HOWTO is not responsible for lost data, damaged hardware or sleepless nights; you use it at your own risk. All examples in this HOWTO are just that: examples. They only hint at how your setup might look.

Problematic kernel versions

Greg Nicholson reports that the release candidate kernel 2.6.23-rc3 seems to cause problems. I have not found any bad reports about the final 2.6.23 release.

Requirements

You need RAID and device-mapper (DM) support in the kernel, as well as support for RAID5 expansion. The examples are from Linux kernel 2.6.18.

/usr/src/linux/.config

#
# Multi-device support (RAID and LVM)
#
CONFIG_MD=y
CONFIG_BLK_DEV_MD=m
..
CONFIG_MD_RAID456=m
CONFIG_MD_RAID5_RESHAPE=y
..
CONFIG_BLK_DEV_DM=m


make menuconfig:

Linux Kernel Configuration: Devices
Device Drivers  --->
 Multi-device support (RAID and LVM)  --->
  [*] Multiple devices driver support (RAID and LVM)
  <M>   RAID support
  <M>     RAID-4/RAID-5/RAID-6 mode
  [*]       Support adding drives to a raid-5 array (experimental)
  <M>   Device mapper support
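
If your running kernel exposes its configuration via /proc/config.gz (this requires CONFIG_IKCONFIG_PROC and is not enabled everywhere), you can check the relevant options without digging through /usr/src:

zgrep -E 'CONFIG_MD_RAID456|CONFIG_MD_RAID5_RESHAPE|CONFIG_BLK_DEV_DM' /proc/config.gz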


The RAID5 array needs to be set up, synced and in production state; creating an array in the first place is covered elsewhere (for example in the Software RAID HOWTO at tldp.org).
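
A quick way to confirm that the array is healthy before you start (the device name /dev/md0 is the one used throughout this HOWTO):

mdadm --detail /dev/md0

The State line should read clean (or active), and every member device should be listed as active sync.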


Expand the array

before

The first step is to create the physical partitions on the new drive that will be part of the RAID array. This is how hdd is partitioned:

mycomputer ~ # fdisk -l /dev/hdd

Disk /dev/hdd: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdd1               1       60801   488384001   fd  Linux raid autodetect

Be sure to change the partition's System Id to fd (Linux raid autodetect); a typical fdisk session for this is sketched below.
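
Roughly, such a session with the classic fdisk looks like this; the exact prompts depend on your fdisk version, and partition 1 matches the example above:

mycomputer ~ # fdisk /dev/hdd

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): w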

cat /proc/mdstat, fdisk -l and pvdisplay show size and status of the RAID device.

mycomputer ~ # cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 hda1[0] hdc1[2] hdb1[1]
      487424 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
mycomputer ~ # fdisk -l /dev/md0

Disk /dev/md0: 499.1 MB, 499122176 bytes
2 heads, 4 sectors/track, 121856 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md0 doesn't contain a valid partition table
mycomputer ~ # pvdisplay
  --- Physical volume ---
  PV Name               /dev/md0
  VG Name               raidvol
  PV Size               476 MB / not usable 0
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              119
  Free PE               3
  Allocated PE          116
  PV UUID               DVYSgN-poHd-9Xxb-gQ86-z5lo-ecCD-uQwLTq

When expanding a RAID5 array, every block in the array needs to be read and written back to a new location. If there are any bad sectors on your hard drives, this will fail and the array will go into degraded mode. Rather than finding this out during the expansion, you can check for bad sectors beforehand by performing a data scrub:

echo check >> /sys/block/md0/md/sync_action

Monitor the progress of the check with

watch -n .1 cat /proc/mdstat

Either it will complete successfully and you can continue to the expansion or it will fail and you'll need to replace the failed drive before continuing.
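
After the check has finished, you can also look at the mismatch counter in the same sysfs directory; anything other than 0 means the check found inconsistencies between data and parity:

cat /sys/block/md0/md/mismatch_cnt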

expanding

The new disk needs to be added to the array:

mdadm /dev/md0 --add /dev/hdd1

It now shows up as a new drive in the array:

mycomputer ~ # cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 hda1[0] hdd1[3](S) hdc1[2] hdb1[1]
      487424 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

In this configuration it serves as a spare drive to sync against in case one of the other disks drops out of the array.

Now we can expand the array. During the process all stripes are rewritten across all disks. This takes longer than a plain resynchronisation of the array against one disk, and may take anywhere from several minutes to many hours.

mdadm --grow /dev/md0 --raid-disks=4

Now stop! You have to wait for the reshape to finish before you can use the additional space. You can watch the progress with cat /proc/mdstat; an additional line gives an estimate of when the sync will be done.

This process can take a REALLY long time, depending on the number of devices and the amount of data on them. 10 hours or more is not unheard of. To speed up the process a bit you can use

echo 25000 > /proc/sys/dev/raid/speed_limit_max

With kernel 2.6.24 you can additionally raise the minimum rebuild speed:

echo 25000 > /proc/sys/dev/raid/speed_limit_min
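
To see what the limits are currently set to (the kernel defaults are usually 1000 and 200000 KB/s, but this may differ on your system):

cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max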

When the --grow operation has finished, it looks like this:

mycomputer ~ # cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 hda1[0] hdd1[3] hdc1[2] hdb1[1]
      731136 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

Once the rebuild has completed you can tell LVM that your physical device has grown.

pvresize /dev/md0
  Physical volume "/dev/md0" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized
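
You can verify that the volume group has picked up the extra space (the volume group is called raidvol in these examples):

vgdisplay raidvol

The Free PE / Size line should now show the additional extents.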

Now all the space of your RAID device is available to your logical volumes. You can create new logical volumes or resize existing ones. See also http://www.tldp.org/HOWTO/LVM-HOWTO/extendlv.html

umount /dev/raidvol/mypart
e2fsck -t -f /dev/raidvol/mypart
lvresize -L +200M /dev/raidvol/mypart
  Extending logical volume mypart to 612.00 MB
  Logical volume mypart successfully resized

resize2fs gets an additional option to set the stride to 16, matching the 64k chunk size with the usual 4k filesystem block size (stride = chunk size / block size = 64k / 4k = 16; see "cat /proc/mdstat" above):

resize2fs -S 16 -p /dev/raidvol/mypart

Setting the stride is only necessary if you did not give a stride parameter in the last format (mke2fs) command. By setting the stride you can get more throughput when accessing an ext2/ext3 partition on your RAID5 array. If you did give the hint to mke2fs, there is no need to set this option again; resize2fs determines the parameter by heuristic analysis. (Have a look at http://www.tldp.org/HOWTO/Software-RAID-HOWTO-5.html#ss5.11)
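
For comparison, if you were creating a fresh filesystem on the array you could pass the stride directly at mke2fs time (older e2fsprogs use -R stride=16 instead of -E); the volume name mynewpart here is only a placeholder:

mke2fs -j -E stride=16 /dev/raidvol/mynewpart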

mount /dev/raidvol/mypart
df -m /mynewpartition

Before the next reboot you have to update /etc/mdadm.conf. Depending on your configuration you either have to extend the DEVICE section so that it includes the new disk, like this:

DEVICE /dev/hd*[a-h][0-9] /dev/sd*[a-h][0-9]

or update the ARRAY section (num-devices=n+1), like this:

ARRAY /dev/md0 level=raid5 num-devices=4
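
Instead of editing the ARRAY line by hand, you can let mdadm print the current array definition and merge that into the config file (double-check the output before replacing an existing ARRAY line):

mdadm --detail --scan

It prints a line of the form ARRAY /dev/md0 level=raid5 num-devices=4 UUID=... which can take the place of the old entry.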

afterwards

mycomputer ~ # cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 hda1[0] hdd1[3] hdc1[2] hdb1[1]
      731136 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
mycomputer ~ # fdisk -l /dev/md0

Disk /dev/md0: 714 MB, 748683264 bytes
2 heads, 4 sectors/track, 182784 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md0 doesn't contain a valid partition table
mycomputer ~ # pvdisplay /dev/md0
  --- Physical volume ---
  PV Name               /dev/md0
  VG Name               raidvol
  PV Size               714 MB / not usable 0
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              178
  Free PE               62
  Allocated PE          116
  PV UUID               DVYSgN-poHd-9Xxb-gQ86-z5lo-ecCD-uQwLTq

If anything goes wrong

Neil Brown has posted some helpful advice for the case that your grow command fails. He writes that

mdadm -C /dev/md0 -l5 -n4 -c256 --assume-clean /dev/sdf1 /dev/sde1 \
   /dev/sdd1 /dev/sdc1

could reset your array to its previous state with 4 disks instead of 5. Greg Nicholson pointed out that the disk names have to be given in alphabetical order, though. Note that the command has to be adapted to your specific configuration (surprisingly..).
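
Before attempting such a re-create, it is a good idea to record the current layout of all member disks so you know the original device order and chunk size (device names as in Neil's example):

mdadm --examine /dev/sd[c-f]1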
