[clug] Hardware Assisted RAID (RAID through BIOS/Chipset RAID)

Edward C. Lang edlang at edlang.org
Mon Oct 29 06:14:17 MDT 2012


Hi,

On Mon, Oct 29, 2012 at 10:26:53PM +1100, Paul Wayper wrote:
> Performance measures?  Not to hand.  Rough guess: there won't be much in
> it.  LVM basically supports RAID0 (striping) and RAID1 (mirroring).  It
> doesn't do anything with parity, so it doesn't support the higher
> single-digit RAID versions.  If you really want, you can make a PV of a
> mirrored LV, and use two such PVs to create RAID10 storage using LVM.  I
> wouldn't bother, MD is much better at this.

Newer versions of LVM2 do actually support RAID1, 4, 5 and 6. It was
included as a technology preview in RHEL 6.3 [0]. That's not to say things
entirely function as you might expect -- for example, I couldn't resize a
RAID5 logical volume [1]. In a traditional RAID5 array you'd simply add
another disk, or migrate to bigger disks. What are you supposed to do with
a logical volume that exists as a single, unresizable device?
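For anyone wanting to try it, creating a RAID5 LV with a recent LVM2 looks
roughly like this (a sketch; the device and volume names are made up for
illustration):

```shell
# Assumed setup: three spare partitions, names hypothetical.
pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1
vgcreate vg_data /dev/sdb1 /dev/sdc1 /dev/sdd1

# Create a RAID5 LV. "-i 2" means two stripe (data) devices,
# so three devices are consumed in total (2 data + 1 parity).
lvcreate --type raid5 -i 2 -L 100G -n lv_raid5 vg_data
```

It's the resulting LV that I couldn't grow, per [1].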

I did (but didn't save) some rough bonnie++ benchmarking. From memory, XFS
on top of LVM2 RAID5 was fastest, followed by XFS on MD RAID5 on LVM2 LVs,
then ext4 on the pair. Maybe I didn't tune as well as I could have...
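If anyone wants to repeat the comparison, the runs were along these lines
(the mount point and user are placeholders):

```shell
# Point bonnie++ at the filesystem under test; -u is needed
# because bonnie++ refuses to run as root.
bonnie++ -d /mnt/test -u nobody

# -q emits CSV on stdout, which bon_csv2html (shipped with
# bonnie++) turns into a comparison table.
bonnie++ -d /mnt/test -u nobody -q | bon_csv2html > results.html
```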

> What I have done elsewhere is use MD to create RAID1 devices from two
> equal partitions, and then use LVM to stripe data across them.  The
> lights flashed, /proc/mdstat told me it works, it seemed fast, I didn't
> do any benchmarks.

I'd previously set up MD RAID10 with LVM sitting on top:

    http://edlang.org/not-yours/n40l-raid-layout.png

But that seemed wasteful, and then Ubuntu stopped automatically
recognising the second layer of RAID at boot, which, combined with
Ubuntu's default behaviour of hanging when it can't mount a filesystem,
was enough of an excuse to revisit the entire situation.
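A layout like that is built along these lines (a sketch from memory;
device names and sizes are made up):

```shell
# Four equal partitions combined into one MD RAID10 array
# (device names are hypothetical).
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

# LVM then sits on top of the single array.
pvcreate /dev/md0
vgcreate vg_n40l /dev/md0
lvcreate -L 500G -n lv_media vg_n40l
```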

> Ed, what is your actual load here?  You're obviously interested in
> performance yet I'm wondering if the performance difference between
> RAID1 using LVM and RAID1 using MD is actually going to make any
> difference to your real-world load; any more than, say, investing in
> more RAM, or spending time optimising your application, or tuning your
> file system options.

A general home mix of media, VMs and backing up important stuff. At some
point in the next few months there'll be some large text file analyses
run using this storage.

Annoyed with the inflexibility of LVM2's current RAID5 implementation, I
decided to have a look at running Debian GNU/kFreeBSD under KVM with the
idea of presenting LVs to the new VM, and then ZFS+iSCSI+NFS from it. Boy,
was I in for a rude shock: *BSD kernels do not play nicely with KVM in
terms of I/O. This is what I see under KVM:

ada0 at ata0 bus 0 scbus0 target 0 lun 0
ada0: <QEMU HARDDISK 1.1.2> ATA-7 device
ada0: 16.700MB/s transfers (WDMA2, PIO 8192bytes)
ada0: 8192MB (16777216 512 byte sectors: 16H 63S/T 16383C)
ada0: Previously was known as ad0

This is what I see under VMware player:

da0 at mpt0 bus 0 scbus2 target 0 lun 0
da0: <VMware, VMware Virtual S 1.0> Fixed Direct Access SCSI-2 device
da0: 320.000MB/s transfers (160.000MHz, offset 127, 16bit)
da0: Command Queueing enabled
da0: 8192MB (16777216 512 byte sectors: 255H 63S/T 1044C)

320MB/s v. 16.7MB/s is pretty significant! There are many blog entries and
mailing list posts complaining about this issue. The most useful one [2]
describes how to install and then use the virtio kernel driver on FreeBSD;
unfortunately, the same driver isn't so readily available on Debian. It's
really very painful trying to do anything, such as compiling, with such
inferior disk speeds. I did try to be clever by following that blog in a
VMware VM and then loading the resulting disk in KVM... it didn't really
work out in the short time I paid attention to it.
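For the record, once the virtio kmods are in place, enabling them on
FreeBSD amounts to a few lines in /boot/loader.conf plus switching the
device names in /etc/fstab, roughly as [2] describes. Treat this as a
sketch rather than a tested recipe:

```
# /boot/loader.conf -- load the virtio modules at boot
virtio_load="YES"
virtio_pci_load="YES"
virtio_blk_load="YES"
if_vtnet_load="YES"

# /etc/fstab then refers to vtbd0 instead of ada0, e.g.:
# /dev/vtbd0p2    /    ufs    rw    1    1
```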

With a bit of luck I'll get FreeBSD working to my satisfaction under KVM.
If not, back to the drawing board.

> P.S. LVM mirroring is also used to migrate storage from one PV to
> another while the LV is active... which is pretty neat IMO.

Quite a few other UNIXes have had this for years! It's nice to see LVM2
catching up and then pulling ahead.

Regards,

Edward.

[0] https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/raid_volumes.html

[1] http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=691608

[2] http://viktorpetersson.com/2012/01/16/how-to-upgrade-freebsd-8-2-to-freebsd-9-0-with-virtio/

-- 

http://edlang.org/
