Archive for the ‘file systems’ Category

ZFS Cheatsheet

July 9th, 2016

Create a RAID 1 style file system across two devices:

zpool create tank mirror /dev/sdb /dev/sdc

Add another pair of mirrored drives to create an array similar to RAID 10:

zpool add tank mirror /dev/sdd /dev/sde

Display information about the file system:

zpool status
zpool list
zpool list -v
zfs list
zfs list -t all
zfs get all

Disable a drive:

zpool offline tank /dev/sdj

Replace an active drive:

zpool replace -f tank /dev/sds /dev/sdt

Scrub an array:

zpool scrub tank

Enable autoexpand for a zpool (allows growing arrays by replacing drives with larger models):

zpool set autoexpand=on tank

Enable file system compression:

zfs set compression=lz4 tank

Create a new file system:

zfs create tank/partition1
Categories: cheatsheet, file systems, ZFS

Wipe a Hard Drive

April 23rd, 2014

Zero a drive with progress (change 1T to the size of the drive you're zeroing):
# dd if=/dev/zero bs=1M | pv -s 1T | dd bs=1M of=/dev/sde
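
Rather than rounding to 1T, pv can be given the exact byte count so the progress bar and ETA are accurate. A sketch: `drive_bytes` is a hypothetical helper for converting the kernel's 512-byte sector count (as found in /sys/block/sde/size), while `blockdev --getsize64` reports bytes directly; the device commands are in comments since they need root and a real drive.

```shell
# Hypothetical helper: convert a kernel-reported 512-byte sector count
# into the exact byte count that pv -s accepts.
drive_bytes() {
  echo $(( $1 * 512 ))
}

# On a live system (root required; /dev/sde is the example device):
#   SIZE=$(blockdev --getsize64 /dev/sde)
#   dd if=/dev/zero bs=1M | pv -s "$SIZE" | dd bs=1M of=/dev/sde

drive_bytes 3907029168   # sector count of a 2 TB drive -> 2000398934016
```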

Erase the drive then read back the blocks to check for errors:
# badblocks -svw -t 0x00 /dev/sde

Categories: file systems

BTRFS Cheatsheet

April 20th, 2014

Create a RAID 1 style file system across two devices:

mkfs.btrfs -d raid1 -m raid1 /dev/sdl /dev/sdn

Display information about the filesystem:

btrfs fi show /mnt/btrfs
btrfs fi df /mnt/btrfs

Check the data while online:

btrfs scrub start /mnt/btrfs
btrfs scrub status /mnt/btrfs

Add a drive:

btrfs device add /dev/sda /mnt/btrfs
btrfs balance start -d -m /mnt/btrfs

Remove a drive:

btrfs device delete /dev/sdl /mnt/btrfs

Replace a drive:

btrfs replace start /dev/sda /dev/sdl /mnt/btrfs

Convert to RAID 1 (both data and metadata):

btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/btrfs
btrfs balance status /mnt/btrfs
Categories: cheatsheet, file systems


Partition a Drive Larger than 2TB

April 25th, 2011

Create a single, aligned partition on a drive larger than 2TB.

% sudo parted /dev/sdx
(parted)% mklabel gpt
(parted)% mkpart primary 1 -1
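
The `1` start here means 1 MiB, which is why the partition comes out aligned: 1 MiB is an even multiple of both 512-byte and 4096-byte sectors. parted itself can confirm with `align-check optimal 1` at the `(parted)` prompt; the arithmetic behind it is simply:

```shell
# A 1 MiB partition start, checked against common physical sector sizes
start_bytes=$(( 1024 * 1024 ))
echo $(( start_bytes % 512 ))    # 0 -> aligned for 512-byte sectors
echo $(( start_bytes % 4096 ))   # 0 -> aligned for 4K-sector drives
```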
Categories: file systems, howto

Linux Software RAID10 Benchmarks

January 31st, 2010

Tests were run across four 7200RPM SATAII drives on a PCI-X card, first with the card in a plain PCI slot (32-bit, 133MB/sec theoretical max, probably the slowest bus configuration possible), and then again after moving it to a motherboard with dual PCI-X slots. The server runs Ubuntu 9.10 AMD64 Server. Each result lists the PCI number first, then the PCI-X number.

Benchmark is a simple ‘dd’ sequential read and write.

write: dd if=/dev/zero of=/dev/md2 bs=1M
read: dd if=/dev/md2 of=/dev/null bs=1M
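
One caveat with bare dd benchmarks: the write figure can include data still sitting in the page cache. Adding `conv=fdatasync` makes dd flush to disk before reporting its rate. A small demonstration against a scratch file (for a real run, point `of=` at the md device and run as root):

```shell
# conv=fdatasync forces a flush to stable storage before dd reports,
# so the write number reflects the disk rather than the page cache
dd if=/dev/zero of=bench.tmp bs=1M count=8 conv=fdatasync 2>/dev/null
stat -c %s bench.tmp   # 8 MiB actually written: 8388608
rm -f bench.tmp
```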

mdadm --create /dev/md2 --verbose --level=10 --layout=n2 --raid-devices=4 /dev/sd[ftlm]1

write: 13.2 MB/s (PCI), 144 MB/s (PCI-X)
read: 4.0 MB/s (PCI), 89.3 MB/s (PCI-X)

mdadm --create /dev/md2 --verbose --level=10 --layout=f2 --raid-devices=4 /dev/sd[ftlm]1

write: 48.3 MB/s (PCI), 131 MB/s (PCI-X)
read: 92.7 MB/s (PCI), 138 MB/s (PCI-X)

mdadm --create /dev/md2 --verbose --level=10 --layout=o2 --raid-devices=4 /dev/sd[ftlm]1

write: 47.4 MB/s (PCI), 135 MB/s (PCI-X)
read: 98.7 MB/s (PCI), 142 MB/s (PCI-X)

And more comparisons:


write: 38.9 MB/s
read: 64.8 MB/s

Single Disk (PCI)

write: 59.4 MB/s
read: 71.9 MB/s

Categories: file systems, server

Remove Stale LVM Devices

January 23rd, 2010

Have an LVM device left on your system from a drive that was removed before pvremove was run?

$ sudo dmsetup remove /dev/mapper/removed-device
Categories: file systems, LVM

Replace an LVM Drive with a Larger One

March 21st, 2009

LVM allows you to hot-add devices to expand volume space. It also allows you to hot-remove devices, as long as there are enough free extents in the volume group (see vgdisplay) to move the data around. Here I'm going to replace a 400 GB drive (sdg) with a 750 GB one (sdh) in volume group "disks", which holds logical volume "backup". It does not matter how many hard drives are in the volume group, and the filesystem can stay mounted.

  1. Partition and create a physical volume on the device
    $ sudo pvcreate /dev/sdh1
  2. Add the new drive to the volume group
    $ sudo vgextend disks /dev/sdh1
  3. Move all extents from the old drive to the new one (this step may take hours)
    $ sudo pvmove -v /dev/sdg1
  4. Remove the old drive
    $ sudo vgreduce disks /dev/sdg1
  5. Expand the logical volume to use the rest of the disk. In this case, another 350GB.
    $ sudo lvextend -l+83463 /dev/disks/backup
  6. Expand the file system (use resize2fs for ext2/3/4 or xfs_growfs for XFS, whichever matches the filesystem on the volume)
    $ sudo resize2fs /dev/disks/backup
    $ sudo xfs_growfs /dev/disks/backup
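
The extent count passed to `lvextend -l` in step 5 comes straight from the drive sizes: with LVM's default 4 MiB extent size, the freed space divided by the extent size gives the number of extents. The figures below are illustrative; `vgdisplay` shows the real free extent count, and `lvextend -l +100%FREE` skips the arithmetic entirely.

```shell
# Default LVM extent size is 4 MiB; "another 350GB" of freed space
# works out to roughly this many extents (illustrative round numbers)
extent_bytes=$(( 4 * 1024 * 1024 ))
free_bytes=$(( 350 * 1000 * 1000 * 1000 ))   # 350 GB in decimal units
echo $(( free_bytes / extent_bytes ))        # about 83446 extents
```

The post's actual figure (83463) differs slightly because real drive capacities are not round numbers.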
Categories: file systems, howto, LVM

mdadm Cheatsheet

November 4th, 2008

Scan a system for RAID arrays and save findings so the array reappears across reboots:

# mdadm --detail --scan >> /etc/mdadm/mdadm.conf

Create a RAID5 array out of sdm1, sdj1, and a missing disk (all partitioned with raid-autodetect partitions):

# mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sd[mj]1 missing

Create a RAID1 array:

# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sd[ts]1
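
Creation returns immediately while the mirror syncs in the background; `cat /proc/mdstat` (or `watch cat /proc/mdstat`) shows a progress line. A small helper to pull out the percentage; the helper name and the sample line are illustrative, modeled on typical mdstat output:

```shell
# Extract the completion percentage from an mdstat progress line
resync_pct() {
  grep -Eo '[0-9.]+%' <<< "$1"
}

# Sample line in the style /proc/mdstat prints during an initial sync:
line='[===>.............]  resync = 17.2% (168044032/976762496) finish=81.1min speed=166132K/sec'
resync_pct "$line"   # 17.2%
```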

Remove a RAID array:

# mdadm --stop /dev/md1
# mdadm --zero-superblock /dev/sd[ts]1

Replace a failed drive that has been removed from the system:

# mdadm /dev/md3  --add /dev/sdc1 --remove detached

Add a new drive to an array and remove an existing drive at the same time:

# mdadm /dev/md0 --add /dev/sda1 --fail /dev/sdb1 --remove /dev/sdb1

Add a drive to a RAID 5 array, growing the array size:

# mdadm --add /dev/md1 /dev/sdm1
# mdadm --grow /dev/md1 --raid-devices=4
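
Usable RAID 5 capacity is (n - 1) times the smallest member, so the grow above takes a 3-disk array from two to three members' worth of space; once the reshape finishes, the filesystem still has to be grown separately (e.g. resize2fs /dev/md1 for ext3/4). The capacity arithmetic, with illustrative 1 TB members:

```shell
# RAID 5 keeps one member's worth of parity: usable = (n - 1) * member
member_tb=1
for n in 3 4; do
  echo "$n disks -> $(( (n - 1) * member_tb )) TB usable"
done
```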

Fixing an incorrect /dev/md number (e.g. /dev/md127):

1. Remove any extra parameters for the array, keeping only the UUID, in /etc/mdadm/mdadm.conf. For example:

#ARRAY /dev/md1 level=raid1 num-devices=2 metadata=1.2 UUID=839813e7:050e5af1:e20dc941:1860a6ae
ARRAY /dev/md1 UUID=839813e7:050e5af1:e20dc941:1860a6ae

2. Then rebuild the initramfs:

sudo update-initramfs -u
Categories: cheatsheet, file systems

hdparm -t /dev/md0

June 22nd, 2007

Timing buffered disk reads: 1248 MB in 3.00 seconds = 415.65 MB/sec

Categories: file systems

MDADM Versions

June 16th, 2007
distro             kernel version   mdadm version
Ubuntu 6.06 LTS    2.6.15           1.12.0
Ubuntu 7.04        2.6.20           2.5.6
Ubuntu 8.04 LTS    2.6.24           2.6.3
Ubuntu 8.10        2.6.27           2.6.7
Ubuntu 9.04        2.6.28
Ubuntu 10.04 LTS   2.6.32
Ubuntu 10.10       2.6.35
Ubuntu 11.04       2.6.38           3.1.4
Ubuntu 12.04       3.2.0            3.2.3
CentOS 4.5         2.6.9            1.12.0
CentOS 5.0         2.6.18           2.5.4
CentOS 6.0         2.6.32           3.1.3
Debian 4.0         2.6.18           2.5.6
Debian 5.0         2.6.26
Debian 6.0         2.6.32           3.1.4
Fedora 7           2.6.21           2.6.1
Fedora 15          2.6.38           3.1.5

MDADM 2.x on kernels >2.6.17 supports online resizing of RAID 5 arrays :)

Categories: file systems