Gentoo Wiki


This article is part of the HOWTO series.


This guide describes migrating an existing setup to software RAID. Please read and make sure you understand HOWTO Install on Software RAID before proceeding. The guide assumes some prior knowledge (Modifying the partition table, configuring and installing a kernel).

Handling live data can be dangerous:

  1. Read the man page for, and understand, each command before executing it.
    • In particular, verify the rsync flags. Many people edit the rsync flags on this page, and the original author has long since given up restoring them.
  2. Make sure to use a recent enough kernel and RAID tools (mdadm >= 2.5).


Linux Kernel Configuration: RAID settings
Device Drivers  --->
  Multi-device support (RAID and LVM)  --->
    [*] Multiple devices driver support (RAID and LVM)
    <*>   RAID support
      < >     Linear (append) mode (NEW)
      <*>     RAID-0 (striping) mode
      <*>     RAID-1 (mirroring) mode
      <*>     RAID-10 (mirrored striping) mode (EXPERIMENTAL)
      <*>     RAID-4/RAID-5/RAID-6 mode
      [*]       Support adding drives to a raid-5 array (NEW)

If you wish to use genkernel, copy a kernel config file, with the above settings enabled, to /etc/kernels/kernel-config-x86-`uname -r`

  1. emerge -av device-mapper
  2. genkernel all

Technical background

RAID elements carry a superblock that holds RAID metadata. It is used to identify which array a device belongs to (via the array UUID), record the RAID level and the device's role within the array, and allow the kernel or mdadm to auto-assemble the array at boot.

The superblock is placed at the end of a RAID element, so each element offers slightly less usable space than the raw partition. Because of this, you cannot simply create a RAID array with a superblock on top of an existing partition without losing data.
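As a worked example (all sizes hypothetical, assuming the old version-0.90 metadata format that mdadm of this era used by default), the space lost to the superblock can be estimated like this: the 0.90 superblock takes 64 KiB and sits at the highest 64 KiB-aligned offset of the element.

```shell
# Hypothetical 100 GiB RAID element; the version-0.90 superblock occupies
# 64 KiB at the (64 KiB-aligned) end of the element, so that much is lost.
part_kib=$((100 * 1024 * 1024))   # partition size in KiB
sb_kib=64                         # version-0.90 superblock size in KiB
usable_kib=$(( (part_kib / sb_kib) * sb_kib - sb_kib ))
echo "usable: ${usable_kib} KiB"
```

Newer metadata formats (1.x) reserve a similar small amount, though placed differently; the point is only that the usable size is always a little below the partition size.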

Migrating data without extra drives

The following sections assume you have data on an existing disk drive and some new disk drives. At the end of this procedure, all the drives will be used in the RAID setup, and no extra drives will be needed to move the data. The filesystem type can be changed during this process.


Note: If the system is very active, it should be placed in single-user mode (telinit 1 from a running system, or boot with the single kernel parameter) or booted from a LiveCD.
Note: This guide assumes the current disk with data is /dev/sda and the new disk is /dev/sdb. Repeat the same instructions for every partition.

Create partitions on new device

Copy the existing partition schema (if both disks are the same size):

sfdisk -d /dev/sda | sfdisk /dev/sdb

Or create a new one with fdisk.

After either operation, run fdisk /dev/sdb and set all partitions to type fd.

Warning: Do not forget to set all RAID partition types (except type 85, Linux extended) to hex value fd (Linux raid auto). You will receive no warning message from the RAID tools or the kernel if you fail to do so, and the RAID will not be reassembled after rebooting.
Warning: Do not set the partition type to fd after you have already set up the RAID. It is better to recreate the RAID and copy the data again.
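The partition types can be double-checked from the sfdisk dump before assembling anything. A minimal sketch, using a hypothetical dump string; on a real system, pipe `sfdisk -d /dev/sdb` in instead:

```shell
# Hypothetical `sfdisk -d` output; print any partition whose Id is not fd
dump='/dev/sdb1 : start=       63, size=  1048576, Id=fd
/dev/sdb2 : start=  1048639, size=  2097152, Id=83'
bad=$(printf '%s\n' "$dump" | awk -F'Id=' '/Id=/ { if ($2 !~ /^fd/) print $1 }')
printf '%s\n' "$bad"
```

Any line printed names a partition that still needs its type changed to fd.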

Setup & Activate new RAID partitions

Note: In the following commands, replace /dev/md0 with the first free RAID device (it could be something other than /dev/md0 if the data is currently on a different RAID), and /dev/sdb1 with the correct partition.
# mdadm -C /dev/md0 -l 1 -n 2 missing /dev/sdb1
# echo "ARRAY /dev/md0 UUID=deadbeef:deadbeef:deadbeef:deadbeef" >> /etc/mdadm.conf
# mkreiserfs /dev/md0
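The UUID in the mdadm.conf line above is only a placeholder; the real value comes from `mdadm -D /dev/md0` or `mdadm --detail --scan`. A sketch of pulling it out of a sample scan line (the UUID shown here is made up):

```shell
# Hypothetical output line from `mdadm --detail --scan`
line='ARRAY /dev/md0 level=raid1 num-devices=2 UUID=a1b2c3d4:e5f60718:293a4b5c:6d7e8f90'
uuid=${line##*UUID=}   # strip everything up to and including "UUID="
echo "ARRAY /dev/md0 UUID=${uuid}"
```

In practice, `mdadm --detail --scan >> /etc/mdadm.conf` appends a complete ARRAY line in one step.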

Put data on new array

# mkdir /mnt/raid-md0
# mount /dev/md0 /mnt/raid-md0
# rsync -avHhx --progress / /mnt/raid-md0

Run rsync again just before switching over; the --delete flag tells rsync to delete files from the destination that no longer exist on the source:

# rsync -avHhx --progress --delete / /mnt/raid-md0
# cd /mnt/raid-md0/dev/ && MAKEDEV generic

Finalize RAID setup

You should now be running from the degraded RAID array (containing only /dev/sdb1); verify with:

# df
# cat /proc/mdstat
Warning: The following step destroys data on the old drive. Make sure the new system is behaving properly.
# sfdisk -d /dev/sdb | sfdisk /dev/sda
# mdadm /dev/md0 -a /dev/sda1

You can watch the RAID rebuild with:

# watch -n1 'cat /proc/mdstat'
Warning: Do not reboot or power off the computer until the RAID has finished rebuilding. At best, the rebuild will restart from the beginning on the next boot; at worst, some data loss may occur.
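/proc/mdstat encodes array health in a bracketed status field ([UU] means both members are up, [_U] means the first member is missing). A sketch of extracting it, using a hypothetical mdstat snippet in place of the real file:

```shell
# Hypothetical /proc/mdstat excerpt for a degraded two-way mirror
mdstat='md0 : active raid1 sdb1[1]
      10485696 blocks [2/1] [_U]'
status=$(printf '%s\n' "$mdstat" | awk '/blocks/ { print $4 }')
echo "member status: ${status}"   # [_U] here means one member is missing
```

Once the rebuild completes, the field reads [UU] and it is safe to reboot.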

Now you may wish to jump across to HOWTO_Install_on_Software_RAID and complete the setup.


Migrating to RAID5

This is the same as the migration process to RAID1; modify the RAID creation command to use RAID-5 (-l 5) and to include the additional partitions:

# mdadm -C /dev/md0 -l 5 -n 3 missing /dev/sdb1 /dev/sdc1
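For reference, RAID-5 usable capacity is (n - 1) times the smallest member, since one member's worth of space goes to parity. A quick sketch with three hypothetical 500 GiB partitions:

```shell
# RAID-5 capacity: n members, one member's worth of space used for parity
n=3
member_gib=500
usable_gib=$(( (n - 1) * member_gib ))
echo "${usable_gib} GiB usable"
```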

Expanding a RAID1 to more than 2 drives

Example scenario: if you wish to migrate your root partition to RAID5 and use GRUB as the bootloader, note that GRUB does not support booting from RAID5; the boot partition must remain on, at most, RAID1. However, RAID1 can span more than two disks. This is reasonable for the boot partition, since it is seldom read (at boot) or written (on a kernel upgrade).

Assuming /dev/sda1 and /dev/sdb1 are mirrored on /dev/md0, add /dev/sdc1:

# mdadm /dev/md0 -a /dev/sdc1
# mdadm -G /dev/md0 -n 3
# watch -n .5 cat /proc/mdstat

Adding a drive to RAID

These instructions are for ending up with RAID-5.

Method 1 : Reshape the RAID

mdadm can reshape a live RAID array (see the --grow options in the man page). Many filesystems have resizing tools that can then be used to grow the filesystem to fill the larger array.

Method 2 : Migrate as above

If you are going from 2 to 3 drives, you can do something similar to the following:

# mdadm /dev/md0 -f /dev/sdb1 -r /dev/sdb1
# cd /dev
# mdadm -C /dev/md1 -l 5 -n 3 missing /dev/sdb1 /dev/sdc1
(create a filesystem on /dev/md1 and copy the data across, as in the rsync steps above)
# mdadm -S /dev/md0
# mdadm /dev/md1 -a /dev/sda1
# watch -n .1 cat /proc/mdstat

Last modified: Sun, 22 Jun 2008 08:22:00 +0000