Speed Up Rebuilding Linux Software RAID Arrays

# cat /proc/mdstat

md0 : active raid5 sdf1[7] sdb1[0] sde1[5] sdg1[4] sdh1[3] sdd1[2] sdc1[1]
1465175424 blocks level 5, 64k chunk, algorithm 2 [7/6] [UUUUUU_]
[>...................] recovery = 1.3% (3331200/244195904) finish=2357.0min speed=1700K/sec

Ouch. Two files are used to control the speed of rebuilding RAID arrays in Linux.

/proc/sys/dev/raid/speed_limit_min
/proc/sys/dev/raid/speed_limit_max
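
You can check the current values (in K/sec) by just catting both files before changing anything:

# cat /proc/sys/dev/raid/speed_limit_min
# cat /proc/sys/dev/raid/speed_limit_max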

Even though my _max file is set to 200,000K/sec and my system is not doing anything, my RAID 5 rebuild process is hovering around the _min rebuild speed of 1,000K/sec. With my setup this will take approximately 40 hours to complete, which is too long for me to wait. So I pushed the _min speed up to 10,000K/sec, which will now take about 6 hours to finish and use slightly more of my system's idle resources.

root# echo "10000" > /proc/sys/dev/raid/speed_limit_min

Later I set _min to 50,000K/sec, and the rebuild speed topped out at 25,000K/sec.
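
If you prefer not to echo into /proc directly, the same tunables are exposed through sysctl as dev.raid.speed_limit_min and dev.raid.speed_limit_max, so something like this should be equivalent:

# sysctl -w dev.raid.speed_limit_min=50000
# sysctl -w dev.raid.speed_limit_max=200000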

# cat /proc/mdstat

Personalities : [raid5]
md0 : active raid5 sdf1[7] sdb1[0] sde1[5] sdg1[4] sdh1[3] sdd1[2] sdc1[1]
1465175424 blocks level 5, 64k chunk, algorithm 2 [7/6] [UUUUUU_]
[=>..................] recovery = 5.1% (12661840/244195904) finish=149.9min speed=25726K/sec

The rebuild took less than 3 hours, down from the original 40.
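
One last tip: to keep an eye on a rebuild without re-running cat by hand, watch works nicely:

# watch -n 60 cat /proc/mdstat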

Swap Hard Drives with Ubuntu 6.10

I recently upgraded my main workstation's hard drive under Ubuntu 6.10 and noticed a couple of things changed during the process. Here are my instructions for a fast and reliable hard drive swap. For this howto I'm using SATA drives without LVM, with the default Ubuntu install and partition options.

  1. Prepare
    1. Shut down your machine and install your new hard drive. Don't mess with your current hard drive (yet)
    2. Find a LiveCD. I used the Ubuntu 6.10 LiveCD because it matched my OS, but it doesn't have to match; Knoppix should work fine.
    3. Boot using your LiveCD
    4. After booting, open a terminal and "sudo su" to become root
  2. Set up your new drive
    1. Use cfdisk /dev/sda to look at the partitions on your current drive. I have sda1 of type Linux taking up most of the drive, and a 6 GB sda5 of type Linux swap at the end
    2. Duplicate this on your new drive using cfdisk /dev/sdb, adjusting for space as necessary. I created a new primary partition using all but 6 GB of the space, then a new logical partition using the rest. You must create the partitions in this order to get the right numbering
    3. Make the primary partition bootable
    4. Set the swap partition to type 82 (Linux swap)
    5. Save and quit
    6. Create filesystems on the new partitions using mkfs.ext3 /dev/sdb1 and mkswap /dev/sdb5
  3. Copy data
    1. Make directories to mount your old and new partitions, in this case, /mnt/sda1 and /mnt/sdb1
    2. Mount your drives to these partitions using mount /dev/sda1 /mnt/sda1 and mount /dev/sdb1 /mnt/sdb1
    3. Copy all your data from your old drive to your new drive using cp -a /mnt/sda1/* /mnt/sdb1/ . The -a will preserve owners, permissions, date, etc.
    4. Get up and do something else. It took 70 minutes for my machine to copy about 150 GB of data from one drive to the other
  4. Fix the boot options
    1. This is where Ubuntu 6.10 differs from previous versions. Fstab and menu.lst both use UUID numbers to find partitions. To get the UUID number of your new partitions, run vol_id /dev/sdb1 and vol_id /dev/sdb5 . Copy these numbers into their appropriate places in your /mnt/sdb1/etc/fstab and /mnt/sdb1/boot/grub/menu.lst files. You may need to dig around the menu.lst to find all the entries.
    2. Now install grub onto the MBR of the new drive to make it bootable. To do this I first chroot into my new system using chroot /mnt/sdb1 /bin/bash . Now that you're in the new system, run grub. Inside grub, run root (hd1,0) and then setup (hd1) . This will differ if you have a different drive setup. Quit grub (quit). The commands for this step are collected in a sketch after this list.
  5. Finish up
    1. Log out of your chroot (logout), unmount your mounted drives with umount /dev/sda1 and umount /dev/sdb1, and shut down your computer. Disconnect your old drive, plug your new drive into the old drive's cable, and start your computer back up. If everything went well, it will boot back up as if nothing happened.
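
Here is step 4 condensed into one run of commands, as a sketch for my layout where the new drive is /dev/sdb to Linux and hd1 to grub; adjust the device names if your setup differs. The fstab and menu.lst editing happens between the vol_id commands and the chroot.

# vol_id /dev/sdb1
# vol_id /dev/sdb5
# chroot /mnt/sdb1 /bin/bash
# grub
grub> root (hd1,0)
grub> setup (hd1)
grub> quit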

A Couple Ways to Run FSCK on Ubuntu

My server decided that an executable file didn't really exist on the file system, or so I thought. Lack of sleep was the main problem, but here are some things I did to check my file system for errors. I set up this file system on an Ubuntu 6.06 AMD64 install with LVM, so everything is in LVM instead of standard partitions.

# sudo e2fsck -n /dev/mapper/Ubuntu-root

This was showing errors, but I ran it while the filesystem was mounted and the system was running, so there were open files and the errors were expected. The -n kept e2fsck from attempting to fix anything, which was good, because later I ran the command after booting from an Ubuntu LiveCD and found no errors.

Before booting from the LiveCD I tried to get the system to fix itself by running fsck on boot. There are two ways to do this on Ubuntu, both run from the live system before rebooting. They accomplish the same thing, so only one was really needed.

# sudo touch /forcefsck

# sudo tune2fs -C 40 /dev/mapper/Ubuntu-root
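
The tune2fs approach works by bumping the current mount count (-C) past the filesystem's maximum mount count, so fsck at the next boot considers a check overdue; this assumes 40 is actually above your maximum and that mount-count checking isn't disabled. You can see both counters with something like:

# sudo tune2fs -l /dev/mapper/Ubuntu-root | grep -i 'mount count'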

Neither method appeared to have any effect, probably because the filesystem was fine, so I took down the system and ran fsck from a LiveCD instead. Of course, this wasn't as simple as it should have been: the LiveCD did not detect my LVM volumes, so /dev/mapper/Ubuntu-root was missing. The fix was to install LVM2 and start it up.

# sudo apt-get install lvm2
# sudo /etc/init.d/lvm start

The /dev/mapper/ entries then appeared and I could run all the fscks I wanted. At this point my fsck checks were coming out clean, so file system corruption was not to blame.
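
If the /dev/mapper/ entries still don't show up after starting the init script, activating the volume groups by hand with vgchange should bring them up. I didn't need this, but it's the usual fallback:

# sudo vgchange -a y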

Add a Drive to an LVM Volume

This guide shows how to add a drive to an existing LVM volume.

  1. Erase the partition table on drive /dev/hdd and create the Physical volume

# dd if=/dev/zero of=/dev/hdd bs=1024k count=1
# pvcreate /dev/hdd
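
If you want to double-check the new Physical Volume before touching the Volume Group, pvdisplay will show it:

# sudo pvdisplay /dev/hdd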

  2. Look at the current Volume Group, for fun

# sudo vgdisplay -A

--- Volume group ---
VG Name disks
System ID
Format lvm2
Metadata Areas 7
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 7
Act PV 7
VG Size 859.70 GB
PE Size 4.00 MB
Total PE 220084
Alloc PE / Size 220084 / 859.70 GB
Free PE / Size 0 / 0

VG UUID N4TcI6-DIRS-3edy-FAa0-tdUL-MTSX-bs2lJE

  3. Add the Physical Volume to the existing Volume Group, which I creatively named "disks"

# sudo vgextend disks /dev/hdd

  4. Look at the current Volume Group again, my how it has grown

# sudo vgdisplay -A

--- Volume group ---
-snip-
VG Size 1.11 TB
PE Size 4.00 MB
Total PE 291625
Alloc PE / Size 220084 / 859.70 GB
Free PE / Size 71541 / 279.46 GB

  5. Extend the Logical Volume, this time named "backup", using the free extents reported by vgdisplay

# sudo lvextend -l+71541 /dev/disks/backup

Extending logical volume backup to 1.11 TB
Logical volume backup successfully resized

  6. And then look at vgdisplay again, whee

# sudo vgdisplay -A

--- Volume group ---
-snip-
VG Size 1.11 TB
PE Size 4.00 MB
Total PE 291625
Alloc PE / Size 291625 / 1.11 TB
Free PE / Size 0 / 0

  7. Now the final and most exciting step: expanding the filesystem. You're using XFS, right? And here's a surprise: it should be mounted when you resize it. xfs_growfs will automatically resize the XFS filesystem to use all the available free space, and do it in less than a second.

# sudo xfs_growfs /backup
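
As a final sanity check, df should now report the larger filesystem size on the mount point:

# df -h /backup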