RAID0/1 speeds

2021, Dec 03    

We discussed RAID0 and RAID1 speeds, and which was faster, so I decided to make a Vagrant box to test it.

Background

There are different approaches to RAID. Some systems support hardware RAID, and all support software RAID, since it is part of the OS. The pros and cons of these approaches are out of scope for this blog entry.

On a server, I have a RAID5 system using mdadm (software RAID).

prolle# hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   3160 MB in  2.00 seconds = 1580.26 MB/sec
 Timing buffered disk reads: 402 MB in  3.01 seconds = 133.73 MB/sec
prolle# hdparm -tT /dev/md1

/dev/md1:
 Timing cached reads:   2932 MB in  2.00 seconds = 1466.33 MB/sec
 Timing buffered disk reads: 992 MB in  3.00 seconds = 330.47 MB/sec

This is an old system, but it shows the massive improvement you may get when using RAID.
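
For completeness, the layout behind /dev/md1 can be inspected with standard mdadm tooling; this is just a generic sketch, not output from the server above.

# show RAID level, state and member disks of the array
mdadm --detail /dev/md1
# or get a quick overview of all md devices
cat /proc/mdstat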

On my desktop, I use LVM:

root@ioto:~# hdparm -tT /dev/sda1 /dev/sdb1 /dev/mapper/ioto--slow--storage-generic 

/dev/sda1:
 Timing cached reads:   15552 MB in  2.00 seconds = 7789.85 MB/sec
 Timing buffered disk reads: 384 MB in  3.01 seconds = 127.51 MB/sec

/dev/sdb1:
 Timing cached reads:   15220 MB in  2.00 seconds = 7623.21 MB/sec
 Timing buffered disk reads: 406 MB in  3.01 seconds = 135.06 MB/sec

/dev/mapper/ioto--slow--storage-generic:
 Timing cached reads:   13658 MB in  2.00 seconds = 6839.76 MB/sec
 Timing buffered disk reads: 406 MB in  3.01 seconds = 134.71 MB/sec

We see no performance improvement. My guess is that LVM is running in some sort of JBOD mode; in LVM parlance this is “linear mode”.

Running pvscan -v confirms this.

  PV /dev/sdb1        VG ioto-slow-storage   lvm2 [<2.73 TiB / 1.26 TiB free]
  PV /dev/sda1        VG ioto-slow-storage   lvm2 [<931.51 GiB / <931.51 GiB free]
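
The segment type can also be checked directly with lvs; a minimal sketch, assuming the volume group from above (the fields are standard lvs output options):

# a linear LV reports segtype "linear" and a single stripe
lvs -o lv_name,vg_name,segtype,stripes ioto-slow-storage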

To get the performance increase we expect from RAID0, use the “striped” mapping mode. There are accessible guides for doing this.

LVM supports striping across multiple devices and, as opposed to RAID0, can use the spare space for other volumes.
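
As an illustration, a striped LV could be created like this; the volume group and LV names are hypothetical, -i sets the number of stripes and -I the stripe size in KiB:

# hypothetical example: spread a new 100G LV across two PVs, RAID0-style
lvcreate -i 2 -I 64 -L 100G -n striped-lv vg-example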

My LVM setup is a linear setup. This is OK for me, since 1) I don’t need the performance, 2) the disks are different sizes, and 3) I expect to add more disks later on.

To me the real question is when to use mdadm vs. when to use LVM.

mdadm or lvm?

I did some reading.

Unsurprisingly, my confirmation bias tells me that my hunch was correct. mdadm is good for regular RAID setups with similar disks, and then you put LVM on top of the virtual storage device. LVM is good when you have storage and want flexibility in partition sizes.
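
A minimal sketch of that layering, with hypothetical device and volume names:

# build a RAID1 mirror from two similar disks
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
# put LVM on top of the resulting md device for flexible partitioning
pvcreate /dev/md0
vgcreate vg-raid /dev/md0
lvcreate -L 50G -n data vg-raid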

LVM is also relevant for virtual machines as an alternative to qcow images. This is also out of scope.

Vagrant test setup

Yes, I know benchmarking this in VMs is a bad idea, because the different virtual disks will reside on the same physical disk. I found a blog post and decided to test RAID in VMs anyway. No, I will not do this in production.

The repo is here. Do a vagrant up and it will create the VM and run the benchmarks. It uses libvirt.
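
I will not reproduce the repo here, but the kind of benchmarking such a setup runs is roughly the following; this is a sketch, not copied from the repo, and the device names are assumptions (extra virtual disks typically show up as /dev/vdb and /dev/vdc under libvirt):

# assemble the extra virtual disks into a RAID0 array and benchmark it
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/vdb /dev/vdc
hdparm -tT /dev/vdb /dev/md0
# stop the array and repeat with --level=1 for the RAID1 comparison
mdadm --stop /dev/md0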

In conclusion

  • Not doing RAID is faster than doing RAID
  • hdparm reports the virtual disks as being 3x faster than the underlying physical disk
  • Setting up RAID is trivial - testing failure scenarios might be problematic, though

Update 2022-04-04: Remove dead Suse documentation link related to LVM and qcow.
Update 2023-04-07: Remove dead tomlankhorst.nl link related to mdadm and lvm.