ZFS Cheatsheet

Create a RAID 1 style file system across two devices:

zpool create tank mirror /dev/sdb /dev/sdc

Create a RAIDZ2 file system with 4K sector alignment, then enable LZ4 compression, system-attribute xattrs, POSIX ACLs, and autoexpand:

zpool create -f -o ashift=12 tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd
zfs set compression=lz4 tank
zfs set xattr=sa tank
zfs set acltype=posixacl tank
zpool set autoexpand=on tank
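
One way to double-check that the properties above took effect:

zfs get compression,xattr,acltype tank
zpool get autoexpand tank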

Add another pair of mirrored drives to create an array similar to RAID 10:

zpool add tank mirror /dev/sdd /dev/sde

Display information about the file system:

zpool status
zpool list
zpool list -v
zfs list
zfs list -t all
zfs get all

Disable a drive:

zpool offline tank /dev/sdj
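
To bring that drive back into service later:

zpool online tank /dev/sdj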

Replace an active drive:

zpool replace -f tank /dev/sds /dev/sdt

Scrub an array:

zpool scrub tank
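
Scrub progress and any errors it finds show up in the pool status:

zpool status tank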

Enable autoexpand for a zpool (allows growing arrays by replacing drives with larger models):

zpool set autoexpand=on tank
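
If a larger replacement drive was already resilvered before autoexpand was turned on, the extra space can usually be claimed per device with an explicit expand (device name here is just an example):

zpool online -e tank /dev/sdb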

Enable file system compression:

zfs set compression=lz4 tank

Create a new file system:

zfs create tank/partition1
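
Optionally give the new file system its own mount point and quota (values here are just examples):

zfs set mountpoint=/data tank/partition1
zfs set quota=100G tank/partition1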

Remove a drive, even if resilvering

zpool detach tank sdc

Clear ZFS info from a drive

zpool labelclear /dev/sdt

Remove a drive from a mirror of sdc+sdd, then attach a new drive sde

zpool detach tank sdc
zpool attach -f tank sdd sde

Remove a drive (sdx) from a RAID-Z2, then attach a new drive (sdy)

zpool offline tank sdx
zpool replace tank sdx sdy

Wipe a Hard Drive

Zero a drive with progress (change 1T to whatever drive size you're zeroing):
# dd if=/dev/zero bs=1M | pv -s 1T | dd bs=1M of=/dev/sde
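
To skip guessing the size, the exact byte count can be pulled from blockdev (same target drive assumed):

# dd if=/dev/zero bs=1M | pv -s $(blockdev --getsize64 /dev/sde) | dd bs=1M of=/dev/sde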

Erase the drive then read back the blocks to check for errors:
# badblocks -svw -t 0x00 /dev/sde

BTRFS Cheatsheet

Create a RAID 1 style file system across two devices:

mkfs.btrfs -d raid1 -m raid1 /dev/sdl /dev/sdn

Display information about the filesystem

btrfs fi show /mnt/btrfs
btrfs fi df /mnt/btrfs

Check the data while online:

btrfs scrub start /mnt/btrfs
btrfs scrub status /mnt/btrfs

Add a drive

btrfs device add /dev/sda /mnt/btrfs
btrfs balance start -d -m /mnt/btrfs

Remove a drive

btrfs device delete /dev/sdl /mnt/btrfs

Replace a drive

btrfs replace start /dev/sda /dev/sdl /mnt/btrfs
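
The replace runs in the background; its progress can be checked with:

btrfs replace status /mnt/btrfs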

Convert to RAID 1 (both data and metadata)

btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/btrfs
btrfs balance status /mnt/btrfs

parted

Create a single, aligned partition on a drive larger than 2TB.

% sudo parted /dev/sdx
(parted)% mklabel gpt
(parted)% mkpart primary 1 -1
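
Still at the parted prompt, the alignment can be verified before quitting (1 is the partition number just created):

(parted)% align-check optimal 1
(parted)% print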

Linux Software RAID10 Benchmarks

Tests were run across four 7200 RPM SATA II drives attached to a PCI-X card sitting in a plain PCI slot (32-bit, 133 MB/s theoretical max), probably the slowest bus configuration possible, and then repeated after moving the card to a motherboard with dual PCI-X slots. The server is running Ubuntu 9.10 AMD64 Server.

The benchmark is a simple dd sequential read and write.

write: dd if=/dev/zero of=/dev/md2 bs=1M
read: dd if=/dev/md2 of=/dev/null bs=1M
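
For reference, plain dd like this is skewed by the page cache; forcing a sync on the write and dropping caches before the read (not done for the numbers below) gives a fairer picture:

write: dd if=/dev/zero of=/dev/md2 bs=1M conv=fdatasync
read:  sync; echo 3 > /proc/sys/vm/drop_caches; dd if=/dev/md2 of=/dev/null bs=1M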

mdadm --create /dev/md2 --verbose --level=10 --layout=n2 --raid-devices=4 /dev/sd[ftlm]1

        PCI        PCI-X
write:  13.2 MB/s  144 MB/s
read:   4.0 MB/s   89.3 MB/s

mdadm --create /dev/md2 --verbose --level=10 --layout=f2 --raid-devices=4 /dev/sd[ftlm]1

        PCI        PCI-X
write:  48.3 MB/s  131 MB/s
read:   92.7 MB/s  138 MB/s

mdadm --create /dev/md2 --verbose --level=10 --layout=o2 --raid-devices=4 /dev/sd[ftlm]1

        PCI        PCI-X
write:  47.4 MB/s  135 MB/s
read:   98.7 MB/s  142 MB/s

And more comparisons:

RAID1 (PCI)

write: 38.9 MB/s
read: 64.8 MB/s

Single Disk (PCI)

write: 59.4 MB/s
read: 71.9 MB/s

Replace an LVM Drive with a Larger One

LVM allows you to hot add devices to expand volume space. It also allows you to hot remove devices, as long as there are enough free extents in the volume group (vgdisplay) to move data around. Here I’m going to replace a 400 GB drive (sdg) with a 750 GB one (sdh) from logical volume “backup” on volume group “disks”. It does not matter how many hard drives are in the volume group, and the filesystem can stay mounted.

  1. Partition the new drive and create a physical volume on it
     $ sudo pvcreate /dev/sdh1
  2. Add the new drive to the volume group
     $ sudo vgextend disks /dev/sdh1
  3. Move all extents from the old drive to the new one (this step may take hours)
     $ sudo pvmove -v /dev/sdg1
  4. Remove the old drive from the volume group
     $ sudo vgreduce disks /dev/sdg1
  5. Expand the logical volume to use the rest of the disk. In this case, another 350GB.
     $ sudo lvextend -l+83463 /dev/disks/backup
  6. Expand the file system (resize2fs for ext3/ext4, xfs_growfs for XFS)
     $ sudo resize2fs /dev/disks/backup
     $ sudo xfs_growfs /dev/disks/backup
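
A quick look afterwards to confirm the old PV is gone and the space is in use (names as in the example above):

$ sudo pvs
$ sudo vgdisplay disks
$ df -h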

mdadm Cheatsheet

Scan a system for RAID arrays and save findings so the array reappears across reboots:

# mdadm --detail --scan >> /etc/mdadm/mdadm.conf

Create a RAID5 array out of sdm1, sdj1, and a missing disk (all partitioned with raid-autodetect partitions)

# mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sd[mj]1 missing
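
When the missing disk becomes available, add it and the array will rebuild onto it (device name here is just an example):

# mdadm /dev/md1 --add /dev/sdk1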

Create a RAID1 array

# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sd[ts]1

Remove a RAID array

# mdadm --stop /dev/md1
# mdadm --zero-superblock /dev/sd[ts]1

Replace a failed drive that has been removed from the system

# mdadm /dev/md3  --add /dev/sdc1 --remove detached

Add a new drive to an array, and remove an existing drive at the same time

# mdadm /dev/md0 --add /dev/sda1 --fail /dev/sdb1 --remove /dev/sdb1

Add a drive to a RAID 5 array, growing the array size

# mdadm --add /dev/md1 /dev/sdm1
# mdadm --grow /dev/md1 --raid-devices=4
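
The reshape happens online; its progress shows up in /proc/mdstat:

# cat /proc/mdstat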

Fixing an incorrect /dev/md number (e.g. /dev/md127)

1. Remove any extra parameters for the array, keeping only the UUID, in /etc/mdadm/mdadm.conf. For example:

#ARRAY /dev/md1 level=raid1 num-devices=2 metadata=1.2 UUID=839813e7:050e5af1:e20dc941:1860a6ae
ARRAY /dev/md1 UUID=839813e7:050e5af1:e20dc941:1860a6ae

2. Then rebuild the initramfs

sudo update-initramfs -u

MDADM Versions

Distro            Kernel version   mdadm version
Ubuntu 6.06 LTS   2.6.15           1.12.0
Ubuntu 7.04       2.6.20           2.5.6
Ubuntu 8.04 LTS   2.6.24           2.6.3
Ubuntu 8.10       2.6.27           2.6.7
Ubuntu 9.04       2.6.28           2.6.7.1
Ubuntu 10.04 LTS  2.6.32           2.6.7.1
Ubuntu 10.10      2.6.35           2.6.7.1
Ubuntu 11.04      2.6.38           3.1.4
Ubuntu 12.04      3.2.0            3.2.3
CentOS 4.5        2.6.9            1.12.0
CentOS 5.0        2.6.18           2.5.4
CentOS 6.0        2.6.32           3.1.3
Debian 4.0        2.6.18           2.5.6
Debian 5.0        2.6.26           2.6.7.2
Debian 6.0        2.6.32           3.1.4
Fedora 7          2.6.21           2.6.1
Fedora 15         2.6.38           3.1.5

MDADM 2.x on kernels >2.6.17 supports online resizing of RAID 5 arrays :)