Setting Up RAID using mdadm on Existing Drive

After experiencing a hard-disk failure (luckily nothing important was lost, just some backups), I decided to set up a RAID1 array on my existing Ubuntu 12.04 installation. The goal was to migrate the existing installation to the new RAID array without losing any data. The easy solution would have been to set up the array on two new drives and then copy my data over. However, I did not have a spare drive (apart from the new one) to hold my data while creating the RAID array, so I had to take the trickier route.

I mainly followed François Marier’s excellent tutorial. As I went through it, I realized I had to adjust a few things, either to make it work on Ubuntu 12.04 or because I preferred to do things differently.

I’ve checked the steps below using Ubuntu 12.04 on both a physical and a virtual machine (albeit in the dumb order – first I risked my data and then decided to perfect the process on a VM :-)). I think the same steps should apply to other Debian derivatives and more recent Ubuntu versions as well.

Outline

Before diving into action, I want to outline the whole process. In the first step we will create a degraded RAID1 array (that is, a RAID1 array with one of its drives missing) using only the new drive. Next we will configure the system so it can boot from the new degraded RAID1 array and copy the data from the old drive to the RAID1 array on the new drive. Afterwards, we will reboot the system from the degraded array and add the old drive to the array, so that it is no longer degraded. At this point we will update some of the configuration again to make things permanent, and finally we will test the setup.

Make sure you have backups of your important stuff before proceeding. Most likely you won’t need them, like I didn’t, but just in case.

Partitioning the Drive

For the rest of the tutorial, I’ll assume the old disk, the one with the existing data, is /dev/sda and the new one is /dev/sdb. I’ll also assume /dev/sda1 is the root partition and /dev/sda2 is the swap partition. If you have more partitions or your layout is different, just make sure you adjust the instructions accordingly.
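
If you are not sure which device name corresponds to which physical disk, it is worth double-checking before touching anything. One quick way (fdisk -l works just as well) is:

# lsblk -o NAME,SIZE,TYPE,MOUNTPOINT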

The first step is to create partitions on the new disk that match the sizes of the partitions we would like to mirror on the old disk. This can be done using fdisk, parted, or GUI tools such as Ubuntu’s Disk Utility or gparted.

If both disks are the same size and you want to mirror all the partitions, the easiest way to do so is to copy the partition table using sfdisk:

# sfdisk -d /dev/sda > partition_table
# sfdisk /dev/sdb < partition_table

This will only work if your partition table is MBR (sfdisk doesn’t understand GPT). Before running the second command, take a look at partition_table to make sure everything looks normal. If you’re using GPT drives larger than 2TB, see Asif’s comment regarding sgdisk.
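
I stuck with MBR disks here, but judging from the sgdisk man page the equivalent for same-size GPT disks would be something along these lines (note the direction: the table is copied from the disk named last onto the disk given to -R, and -G then randomizes the GUIDs on the new disk so the two disks don’t share identifiers):

# sgdisk -R=/dev/sdb /dev/sda
# sgdisk -G /dev/sdb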

You don’t need to bother setting the “raid” flag on your partitions like some people suggest. mdadm will scan all of your partitions regardless of that flag. Likewise, the “boot” flag isn’t needed on any of the partitions.

Creating the RAID Array

If you haven’t installed mdadm yet, do so:

# apt-get install mdadm

We create a degraded RAID1 array using only the new drive. Usually a degraded RAID array is the result of a malfunction, but here we do it intentionally: it gives us an operational RAID array into which we can copy our data, after which we can add the old drive to the array and let it sync.

# mdadm --create root --level=1 --raid-devices=2 missing /dev/sdb1  
# mdadm --create swap --level=1 --raid-devices=2 missing /dev/sdb2

These commands instruct mdadm to create RAID1 arrays with two drives, where one of the drives is missing. A separate array is created for the root and swap partitions. As you can see, I decided to have my swap on RAID as well. There are different opinions on the matter. The main advantage is that your system will be able to survive one of the disks failing while the system is running. The disadvantage is that it wastes space. Performance-wise, RAID1 isn’t the improvement one might expect, as Linux already supports striping (like RAID0) when it has swap partitions on two disks. In my case, I have plenty of RAM available and the swap is mostly unused, so I figured I’m better off using RAID1 for the swap as well.
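
For comparison, the non-RAID alternative mentioned above is to simply keep a plain swap partition on each disk and give them equal priority in /etc/fstab, which makes the kernel stripe swap across both disks. Purely illustrative entries (not part of this setup) would look like:

# equal pri= values make the kernel stripe swap across both disks, RAID0-style
/dev/sda2  none  swap  sw,pri=1  0  0
/dev/sdb2  none  swap  sw,pri=1  0  0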

You may encounter the following warning when creating the arrays:

mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array?

Grub 1.99, which is the default bootloader in recent Ubuntu releases, supports booting from partitions with the 1.2-format metadata, so it’s safe to type “y” here.
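
If you are not sure which version of Grub you have, you can check with:

# grub-install --version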

Next, we need to create filesystems on the newly created RAID arrays:

# mkfs.ext4 /dev/md/root
# mkswap /dev/md/swap

The following will record your newly created MD arrays in mdadm.conf:

# /usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf
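
The generated file should contain one ARRAY line per array; you can also preview them with mdadm --detail --scan. The lines look roughly like this (the hostname and UUIDs below are placeholders, yours will differ):

ARRAY /dev/md/root metadata=1.2 name=myhost:root UUID=00000000:00000000:00000000:00000000
ARRAY /dev/md/swap metadata=1.2 name=myhost:swap UUID=00000000:00000000:00000000:00000000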

Preparing to Boot the Array

In this step we shall prepare the system to boot from the newly created RAID array. Of course, we won’t actually do that before copying our data into it.

Start by editing /etc/grub.d/40_custom and adding a new entry to boot the RAID array. The easiest way is to copy the latest boot stanza from /boot/grub/grub.cfg and modify it. The boot stanza looks something like this:

menuentry 'Ubuntu, with Linux 3.2.0-56-generic' --class ubuntu --class gnu-linux --class gnu --class os {
        recordfail
        gfxmode $linux_gfx_mode
        insmod gzio
        insmod part_msdos
        insmod ext2
        set root='(hd0,msdos1)'
        search --no-floppy --fs-uuid --set=root 19939b0e-4272-40e0-846b-8bbe49e4a02c
        linux   /boot/vmlinuz-3.2.0-56-generic root=UUID=19939b0e-4272-40e0-846b-8bbe49e4a02c ro   quiet splash $vt_handoff
        initrd  /boot/initrd.img-3.2.0-56-generic
}

First we need to add

insmod raid
insmod mdraid1x

just after the rest of the insmod lines. This will load the GRUB modules needed to detect your RAID array during the boot process. If you decided to go with 0.90 metadata earlier (despite my recommendation…) you will need to load mdraid09 instead of mdraid1x. Next we need to point the entry at the new root partition. This is done by modifying the UUID (those random-looking hex-and-hyphens strings) arguments in the lines starting with search and linux. To find out the UUID of your root partition, run

# blkid /dev/md/root

This will give something like:

/dev/md/root: UUID="49b6f295-2fe3-48bb-bfb5-27171e015497" TYPE="ext4"

The set root line can be removed as the search line overrides it.

Last but not least, add bootdegraded=true to the kernel parameters, which will allow you to boot the degraded array without any hassle. The result should look something like this:

menuentry 'Ubuntu, with Linux 3.2.0-56-generic (Raid)' --class ubuntu --class gnu-linux --class gnu --class os {
        recordfail
        gfxmode $linux_gfx_mode
        insmod gzio
        insmod part_msdos
        insmod ext2
        insmod raid
        insmod mdraid1x
        search --no-floppy --fs-uuid --set=root e9a36848-756c-414c-a20f-2053a17aba0f
        linux   /boot/vmlinuz-3.2.0-56-generic root=UUID=e9a36848-756c-414c-a20f-2053a17aba0f ro   quiet splash bootdegraded=true $vt_handoff
        initrd  /boot/initrd.img-3.2.0-56-generic
}

Now run update-grub as root so it actually updates the /boot/grub/grub.cfg file. Afterwards, run

# update-initramfs -u -k all

This will make sure that the updated mdadm.conf is included in the initramfs. If you don’t do so, the names of your new RAID arrays will be a mess after reboot.

Copying the Data

Before booting the new (degraded) array, we need to copy our data into it. First mount /dev/md/root somewhere, say /mnt/root, and then copy the old data into it.
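
Assuming the mount point doesn’t exist yet, this boils down to:

# mkdir /mnt/root
# mount /dev/md/root /mnt/root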

# rsync -auxHAX --exclude=/proc/* --exclude=/sys/* --exclude=/tmp/* / /mnt/root

Next you need to update /mnt/root/etc/fstab with the UUIDs of the new partitions (which you can get using blkid). If you have encrypted swap, you should also update /mnt/root/etc/crypttab.
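
The relevant entries in /mnt/root/etc/fstab should end up looking something like this, using the root UUID from the blkid example above; the swap UUID below is just a placeholder (run blkid /dev/md/swap to get yours):

UUID=49b6f295-2fe3-48bb-bfb5-27171e015497 /    ext4  errors=remount-ro  0  1
UUID=00000000-0000-0000-0000-000000000000 none swap  sw  0  0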

The last thing to do before rebooting is to re-install the bootloader on both drives:

# grub-install /dev/sda
# grub-install /dev/sdb

Reboot the computer. Hold the “Shift” key while booting to force the Grub menu to appear. Select the new Grub menu entry you have just added (it should be last on the list). After the system has finished booting, verify that you’re indeed running from the RAID device by running mount, which should show a line like this:

/dev/md127 on / type ext4 (rw,errors=remount-ro)

The number after /dev/md doesn’t matter, as long as it’s /dev/md and not /dev/sda or another real disk device.
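
If you want to map the numbered device back to the array names, /dev/md/ holds symlinks to the numbered nodes; the output will look more or less like this (numbers and dates will differ):

# ls -l /dev/md/
total 0
lrwxrwxrwx 1 root root 8 Dec  1 10:00 root -> ../md127
lrwxrwxrwx 1 root root 8 Dec  1 10:00 swap -> ../md126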

Completing the RAID Array

If you have made it this far, you have a running system with all your data on a degraded RAID array consisting of your new drive only. The next step is to add the old disk to the RAID array. This will delete any existing data on it, so take a few minutes to make sure that you’re not missing any files (this should be fine, as we rsync’ed the data). Adding the old disk to the array is done by:

# mdadm /dev/md/root -a /dev/sda1
# mdadm /dev/md/swap -a /dev/sda2

Make sure you are adding the right partitions to the right arrays. These commands instruct mdadm to add the old disk’s partitions to the new arrays. Syncing the drives might take some time. You can track the progress of building the RAID arrays using:

$ watch cat /proc/mdstat
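
While the sync is running, the output should look roughly like this (device numbers, sizes and speeds below are purely illustrative):

md127 : active raid1 sda1[2] sdb1[1]
      976630336 blocks super 1.2 [2/1] [_U]
      [===>.................]  recovery = 17.5% (170930304/976630336) finish=80.3min speed=167000K/sec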

When it’s done, your RAID arrays are up and running and no longer degraded.

Remove the boot stanza we’ve added to /etc/grub.d/40_custom and edit /etc/default/grub to add bootdegraded=true to the GRUB_CMDLINE_LINUX_DEFAULT configuration variable. This will let your system boot even if the RAID array becomes degraded, which avoids the bug outlined in Ubuntu Freezes When Booting with Degraded Raid.
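
On a stock Ubuntu install the resulting line in /etc/default/grub should look something like this:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash bootdegraded=true"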

Finally update Grub and re-install it:

# update-grub
# grub-install /dev/sda
# grub-install /dev/sdb

We are done! Your RAID array should be up and running.

Testing the Setup

Just getting the RAID array to work is good, but not enough. Since you probably wanted the RAID array as a contingency plan, you should test it to make sure it works as intended.

Let’s make sure that the system is able to keep working in case one of the drives fails. Shut down the system and disconnect one of the drives, say sda. The system should boot fine thanks to the RAID array, but cat /proc/mdstat should show one of the drives missing.
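
You can also query mdadm directly; with one drive disconnected, the state reported for each array should be degraded, along these lines:

# mdadm --detail /dev/md/root | grep 'State :'
          State : clean, degraded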

To restore normal operation, shut down the system and reconnect the drive before booting it back up. Then re-add the drive to the RAID arrays:

# mdadm /dev/md/root -a /dev/sda1
# mdadm /dev/md/swap -a /dev/sda2

Again this might take some time. You can view the progress using watch cat /proc/mdstat.

18 thoughts on “Setting Up RAID using mdadm on Existing Drive”

  1. Hi Guy
    Excellent work. I have followed your how-to and have successfully converted my system from a single-drive to a raid configuration on 12.04. Worked perfectly.
    There are some things that you might want to consider:
    1. In the grub configuration, please correct *bootdegreaded*. It should be bootdegraded, otherwise it won’t boot.
    2. For drives more than 2TB that are using GPT, the user should use sgdisk instead of sfdisk.
    3. When copying data using rsync, after specifying -a, there is no need to use uHAX.

    Very good writeup. I am very thankful.

    Regards

  2. Hi Asif,

    Thanks for your remarks, I’ve also fixed the typo. Regarding the rsync command, the man page says:

    -a, --archive archive mode; equals -rlptgoD (no -H,-A,-X)

    I’m not an rsync expert, but from what I understand the HAX part is still needed…

  3. Hi, thank you for this tutorial, I’m following it and I have one question.
    My drives for the RAID1 are 40GB and 320GB, I want to make a RAID1 of those first 40GB which contains the whole system, nevertheless, on the second drive, I’d like to use the remaining GB’s for extra storage mounted somewhere.
    So, If the new drive is bigger than the old one, the rest of the new drive’s space must be left unpartitioned or can I partition that space and mount it somewhere else, say for example /mnt/personalstuff

    Is there a problem doing the RAID1 this way?

    Thanks in advance.

  4. For RAID1 you need partitions matching the partitions on the old drive. Any extra space can be allocated to a new partition which can be mounted anywhere. The extra partition on the new drive will not be mirrored.

  5. Thank you so much for your answer! That’s nice to hear. Though the remaining space will not be mirrored I wanted only the first 40GBs to be RAIDed. So thanks again! I’ll give it a try.

  6. Excellent tutorial. I used this on a Debian 7.8 server and had great success minus a couple small differences. Issuing the mount command does not show the root disk as /dev/md12X but by UUID. I have a separately mounted home dir however that does show up correctly as /dev/md126, even with it set by UUID in /etc/fstab.

    Only suggestion I would make is to include some of the in-between step commands that need to be issued in code tags as well. I failed to add my root drive to fstab the first time I did this and knew immediately when grub couldn’t find boot device but didn’t recall seeing that explicitly in the tutorial. It was there of course but not in code block like other commands. There’s also no mention of editing the search field in fstab to match that of your root device UUID. The code example shows that if you look but a quick mention of that might help some.

    Also, when you mention the install mdadm maybe include a blurb about the auto-config that happens in dpkg to start ALL arrays at boot. I set mine to none initially out of habit and had to go back to reconfigure once I got to the step of rebooting to test.

  7. One more thing from my experience…when you update-grub you HAVE to grub-install /dev/sdX on at least one drive. I failed to do so at the very end after I waited three hours for my second array to sync and was dropped to a grub rescue> prompt. Had to boot to rescue mode on Debian install disc and reinstall grub.

  8. Thank you for this guide. I made the mistake of shutting the computer down once I pulled the second drive into the array. I used systemrescuecd to boot into the system and do grub install and update. However, when I boot the computer, the uefi firmware can’t find either hard drive. Grub doesn’t even load and there are no errors. It just boots straight to efi editor and there are no hard drives to choose to boot. Any ideas on how to proceed?

  9. Updating /etc/fstab would be wise.
    We have to tell the system to mount / from the new RAID we have created

  10. Seems like newer versions of ubuntu servers do not need
    insmod raid
    only
    insmod mdraid1x

  11. There’s a bug that affects Ubuntu 14.04 (at least).
    Here’s what needs to be done if you don’t want to hit the “device or resource busy” while booting from the degraded raid device:

    echo "sleep 60" > /etc/initramfs-tools/scripts/init-premount/delay_for_raid_array_to_build_before_mounting
    chmod +x /etc/initramfs-tools/scripts/init-premount/delay_for_raid_array_to_build_before_mounting
    update-initramfs -u

    More details:
    http://ubuntuforums.org/showthread.php?t=2241430&p=13114328
    https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1371985

    You’re welcome.

  12. Warning: when using sgdisk the lower case -d flag is for delete partition and the upper case -D flag is for display alignment. Neither of which is helpful. From the man page the flag that imitates dump most closely is probably -p print information.

  13. Thanks for the outstanding guide! Got me up and running no-problem. Again, thank you.

  14. I have 1TB disk(sdb) with only one partition(/dev/sdb1) spanning entire disk and I have another new 1TB disk(sdc) (same exact manufacturer and type) which I need to form RAID1 with existing sdb. I have created a new single partition (/dev/sdc1) similar to /dev/sdb1.
    But now the difference is that /dev/sdb1 is an LVM partition with multiple LVs on it. So it would be great if you can help me with how I can clone /dev/sdb1 to /dev/sdc1. I believe since it’s an LVM partition the rsync command you mentioned won’t work.
    Should I simply :
    (1) dd if=/dev/sdb1 of=/dev/sdc1 ? –> does this work for LVM? And is there a smarter way?
    (2) Or I should do something mentioned like following links related to moving LVM partitions to new disk :
    http://askubuntu.com/questions/161279/how-do-i-move-my-lvm-250-gb-root-partition-to-a-new-120gb-hard-disk

    And since I have another disk(sda) from which I boot, I don’t have to worry about grub changes you mentioned. Also please let me know if any other steps (apart from grub) change in my case.

    Any help greatly appreciated. Please let me know if any more information is needed.
    Thanks.

  15. I have been trying this on a Debian Squeeze. Grub version is 1.98.
    My idea was to get 2 new disks in the end. I have begun with setting up raid with a spare 4th disk I had laying around, which would then become a source for one of the real new disks I had. I didn’t want to mess with the original HDD, in case I would run into some troubles.
    This is where I ran into a problem with grub.

    (sdX below stands either for sda or sdb)

    grub-install /dev/sdX – would work fine for the first (spare 4th disk) added disk.

    But when I then added one of the 2 new disks I wanted to end with
    grub-install /dev/sdX – would throw an error that no such disk exists.

    Weirdly, if I have used the original HDD with a new one, I was able to grub-install on both disks, but the moment I have pulled the original out and added the second new HDD I was in the same problem, when grub would not install (but on both disks!)

    The problem as it turns out is /boot/grub/device.map which is not updated just by running grub-install.
    You need to add the --recheck flag, which writes a new device.map:

    grub-install --recheck /dev/sdX – only needs to be run once after adding new disks

    After that you can grub-install on both disks correctly, which means that even if one would die and you would need to restart your PC, when you are adding a new HDD, you would actually be able to boot.

    Btw. when copying partitions in case the HDDs are the same, the UUIDs are copied as well which is probably not a good thing to have. The solution for me was to format the copied partitions, which set new UUIDs.
