ZFS Cheatsheet

Create a mirrored (RAID 1 style) pool across two devices:

zpool create tank mirror /dev/sdb /dev/sdc
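
Device names like /dev/sdb can change between boots, so stable /dev/disk/by-id paths are often preferred (the IDs below are placeholders):

zpool create tank mirror /dev/disk/by-id/ata-EXAMPLE_SERIAL_A /dev/disk/by-id/ata-EXAMPLE_SERIAL_B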

Create a RAIDZ2 pool with commonly recommended options (ashift=12 aligns writes to 4K sectors):

zpool create -f -o ashift=12 tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd
zfs set compression=lz4 tank       # fast compression, usually a net win
zfs set xattr=sa tank              # store extended attributes in inodes
zfs set acltype=posixacl tank      # enable POSIX ACLs
zpool set autoexpand=on tank       # grow the pool when drives are replaced with larger ones

Add another mirrored pair to the pool from the first example, creating a layout similar to RAID 10:

zpool add tank mirror /dev/sdd /dev/sde
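
The same RAID 10 style layout can also be created in a single step:

zpool create tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde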

Display information about pools and datasets:

zpool status
zpool list
zpool list -v
zfs list
zfs list -t all
zfs get all
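
Query a few specific properties instead of everything (the property names here are just examples):

zfs get compression,mountpoint,used tank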

Take a drive offline:

zpool offline tank /dev/sdj
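
Bring the drive back online when ready:

zpool online tank /dev/sdj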

Replace an active drive:

zpool replace -f tank /dev/sds /dev/sdt
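
Watch resilver progress (the trailing number is a refresh interval in seconds):

zpool status tank 5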

Scrub an array:

zpool scrub tank
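
Scrubs are commonly run on a schedule; a monthly cron entry might look like this (schedule and path are illustrative):

0 2 1 * * /sbin/zpool scrub tank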

Enable autoexpand for a zpool (allows growing arrays by replacing drives with larger models):

zpool set autoexpand=on tank
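
If a drive was replaced before autoexpand was enabled, the device can be expanded manually:

zpool online -e tank /dev/sdb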

Enable file system compression:

zfs set compression=lz4 tank
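
Check how well compression is working:

zfs get compressratio tank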

Create a new file system:

zfs create tank/partition1
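
Properties can also be set at creation time (dataset name and values here are examples):

zfs create -o mountpoint=/srv/data -o quota=100G tank/partition2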

Remove a drive from a mirror, even while it is resilvering:

zpool detach tank sdc

Clear ZFS label information from a drive:

zpool labelclear /dev/sdt

Remove a drive from a mirror of sdc+sdd, then attach a new drive (sde):

zpool detach tank sdc
zpool attach -f tank sdd sde

Take a drive (sdx) offline in a RAID-Z2 and replace it with a new drive (sdy):

zpool offline tank sdx
zpool replace tank sdx sdy
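
If the replacement disk is inserted into the same physical slot as the old one, the new device name can be omitted:

zpool replace tank sdx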

BTRFS Cheatsheet

Create a RAID 1 style file system across two devices:

mkfs.btrfs -d raid1 -m raid1 /dev/sdl /dev/sdn
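
Mount the filesystem via either member device (the mount point is an example):

mkdir -p /mnt/btrfs
mount /dev/sdl /mnt/btrfs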

Display information about the filesystem:

btrfs fi show /mnt/btrfs
btrfs fi df /mnt/btrfs
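
Newer btrfs-progs versions also offer a combined allocation summary:

btrfs fi usage /mnt/btrfs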

Verify data integrity while the filesystem is online:

btrfs scrub start /mnt/btrfs
btrfs scrub status /mnt/btrfs

Add a drive, then rebalance existing data and metadata across all devices:

btrfs device add /dev/sda /mnt/btrfs
btrfs balance start -d -m /mnt/btrfs

Remove a drive:

btrfs device delete /dev/sdl /mnt/btrfs
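
delete migrates data off the device first; if the drive has already failed and is gone, mount degraded and remove the missing device instead (assumes /dev/sdn is a surviving member):

mount -o degraded /dev/sdn /mnt/btrfs
btrfs device delete missing /mnt/btrfs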

Replace a drive (/dev/sda) with a new one (/dev/sdl):

btrfs replace start /dev/sda /dev/sdl /mnt/btrfs
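
Monitor the replacement:

btrfs replace status /mnt/btrfs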

Convert to RAID 1 (both data and metadata):

btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/btrfs
btrfs balance status /mnt/btrfs

mdadm Cheatsheet

Scan a system for RAID arrays and save findings so the array reappears across reboots:

# mdadm --detail --scan >> /etc/mdadm/mdadm.conf

Create a RAID5 array out of sdm1, sdj1, and a missing disk (all partitioned with raid-autodetect partitions):

# mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sd[mj]1 missing
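
The array starts degraded; when the third disk is available, add it and mdadm will rebuild onto it (the device name is an example):

# mdadm /dev/md1 --add /dev/sdk1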

Create a RAID1 array:

# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sd[ts]1
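
Watch the initial sync:

# cat /proc/mdstat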

Remove a RAID array (stop it, then wipe the member superblocks):

# mdadm --stop /dev/md1
# mdadm --zero-superblock /dev/sd[ts]1

Replace a failed drive that has been removed from the system:

# mdadm /dev/md3 --add /dev/sdc1 --remove detached

Add a new drive to an array and remove an existing drive at the same time:

# mdadm /dev/md0 --add /dev/sda1 --fail /dev/sdb1 --remove /dev/sdb1

Add a drive to a RAID 5 array, growing the array size:

# mdadm --add /dev/md1 /dev/sdm1
# mdadm --grow /dev/md1 --raid-devices=4
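
Once the reshape finishes, grow the filesystem on top of the array; for ext4 that would be (adjust for your filesystem):

# resize2fs /dev/md1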

Fix an incorrect /dev/md number (e.g. /dev/md127):

1. In /etc/mdadm/mdadm.conf, remove all parameters for the array except the UUID. For example:

#ARRAY /dev/md1 level=raid1 num-devices=2 metadata=1.2 UUID=839813e7:050e5af1:e20dc941:1860a6ae
ARRAY /dev/md1 UUID=839813e7:050e5af1:e20dc941:1860a6ae

2. Then rebuild the initramfs so the change takes effect at boot (Debian/Ubuntu shown):

sudo update-initramfs -u