[clug] Hardware Assisted RAID (RAID through BIOS/Chipset RAID)

steve jenkin sjenkin at canb.auug.org.au
Tue Oct 9 22:39:51 MDT 2012


David Pisk wrote on 10/10/12 2:42 PM:

> I have done some research, and it
> appears that BIOS RAID is superior to operating system RAID.

Not in the last 10 years, in my experience.

I've had multiple disasters due entirely to hardware RAID, failures
that can't ever happen with software RAID.
"Performance" when applied to anything with rotating disks has changed
by more than 1000:1 in the last 10 years. Old prejudices no longer apply.

If your developers/sysadmin cannot configure your "BIOS RAID" remotely,
then they can't *fix* it when, not if, it breaks.
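
By contrast, a dead disk in a Linux md (software RAID) array is fixable
from any ssh session. A sketch, assuming a mirror at /dev/md0 and
illustrative device names:

    mdadm --manage /dev/md0 --fail /dev/sdb1     # mark the bad member
    mdadm --manage /dev/md0 --remove /dev/sdb1   # pull it from the array
    # ...swap the drive (remote hands, or the next scheduled visit)...
    mdadm --manage /dev/md0 --add /dev/sdc1      # rebuild starts immediately
    cat /proc/mdstat                             # watch it from your desk

No console cart, no reboot into a BIOS setup screen.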

Choose an option that is guaranteed to not need on-site attendance.
I hate standing for hours on end in cold machine rooms with crappy
monitors and keyboards. Not only does it create OH&S issues (they get
RSI, you pay), but you don't do your best work, and most of the time
is spent *waiting*. You can't do anything useful when you're locked
in the Big Room.

Once upon a time, early 1990's, hardware RAID was necessary for
*performance*. Patterson et al. published the first RAID paper in 1987/88,
and within 2 years there were commercial products on the market, e.g. EMC and
Storage Technologies. Big, expensive bits of kit built around
multi-processor systems - they still sit in corporate machine rooms on
the end of SAN's.

And there were a few small hardware RAID systems - single boards with
the hard work of XOR & CRC's done in electronics.

We're 20 years on, and the tradeoffs between dedicated RAID hardware
and general-purpose CPU's are completely different.

If you're buying systems that can't support reasonable throughput rates
with software RAID-1+0, then you've spec'd an underpowered system.
Processors are relatively cheap.
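
For scale: building RAID-1+0 in software is a single command these days.
A sketch with Linux md, device names illustrative:

    # four disks, striped mirrors, default layout
    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
          /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
    mkfs.ext4 /dev/md0

And since RAID-1+0 has no parity to compute, the CPU cost is close to nil.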

Why RAID-1+striping?
Disk is soooo cheap, we don't need RAID-5/6 for most applications.
Almost nobody fills 1TB disks with data; only video does that.
Unless you're looking for 100TB or more, you don't need to spend effort
trying to reduce the number of disks. Even at $200-$300 for
top-of-the-line disks, needing 2 or 3 fewer (RAID-6 vs RAID-1+0) will
only pay for a couple of hours of SysAdmin time. Recovering from a
failed hardware RAID disaster: budget 2 days downtime and 30-40 hours
SysAdmin time.

If you're looking for random I/O performance, you'd have spec'd some
SSD's, right? If you have a need for blindingly fast random I/O, you'd
have bought a Fusion-IO card...

If you want to shuffle large lumps of serial data, nothing comes close
to Hard Disk Drives, HDD's. They have their place, but are poor value
for money elsewhere.

Here's a question or three for your SysAdmins spruiking hardware RAID:

 - How often does it check all the data on the HDD's for new errors?
 - How often does it rewrite all the data in a protected volume?
 - What support does it have for backups, restores and snapshots?

This is what high-end Storage Systems do as a matter of routine.
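
Linux md answers the first two from userspace, no vendor tools needed.
A sketch, assuming an array at /dev/md0:

    # read-verify every sector of every member disk
    echo check > /sys/block/md0/md/sync_action
    # rewrite any inconsistent stripes it finds
    echo repair > /sys/block/md0/md/sync_action
    # mismatches detected by the last scrub
    cat /sys/block/md0/md/mismatch_cnt

Debian, for one, ships a cron job that runs the 'check' pass monthly.
(The third question is where LVM snapshots come in - see below.)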

The higher the capacity of a disk drive, the smaller the bits. No
surprise there... But we are down to tracks measured in 10's of
nanometers... This is incredibly small and creates completely new
challenges.

Creating media that's *perfect* to this level of detail is impossible.
Creating media and heads where the data *stays* as you recorded it is
also impossible.

Summary: spontaneous unrecoverable disk errors are now inevitable.
Finding them quickly and rebuilding protected data before it's
corrupted is a system management imperative.
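
On commodity boxes, the finding part is a loop of SMART self-tests plus
the md scrubs above. A sketch with smartmontools (device name
illustrative):

    # kick off a full surface self-test; the drive runs it in the background
    smartctl -t long /dev/sda
    # read the results and the Reallocated/Pending sector counters
    smartctl -a /dev/sda

smartd can schedule both and email you when the counters start moving.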

If your design does not include backups, restores, and full scans and
rewrites, then it's incomplete...

I won't mention data de-duplication, compression and encryption.
It's stuff you do in Software, not in cheap RAID hardware.
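
For the record, the encryption part is two commands on any modern
distro. A sketch with dm-crypt/LUKS, device and mapper names
illustrative:

    cryptsetup luksFormat /dev/md0        # one-off: write the LUKS header
    cryptsetup luksOpen /dev/md0 secure   # map the decrypted device
    mkfs.ext4 /dev/mapper/secure

Try doing that through a RAID BIOS setup screen.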

Of course, they're using Logical Volumes, aren't they...
IBM started licensing LVM in the early 1990's; it's a *game changer*,
as much as overlay mounts and snapshots.
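
If they're not, it's a handful of commands to start. A sketch, volume
names illustrative:

    pvcreate /dev/md0              # turn the md array into a physical volume
    vgcreate vg0 /dev/md0
    lvcreate -L 100G -n data vg0   # a logical volume you can resize later
    # the game changer: a copy-on-write snapshot, e.g. for consistent backups
    lvcreate -L 10G -s -n data_snap /dev/vg0/data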

If you haven't already, perhaps have someone with a different skill set
review the design you've already got. You may get some useful
suggestions out of it.

Maybe you need to look at some of the nice, cheap storage appliances out
there. They export SMB and NFS filesystems, *and* iSCSI devices.
And with two of them, you can have off-site backups over the network,
with trivial and 'instant' failover in a DR scenario.
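
Consuming one from a Linux box is equally trivial. A sketch, hostname
and export names illustrative:

    mount -t nfs filer1:/export/home /mnt/home
    # open-iscsi: find and log in to the appliance's targets
    iscsiadm -m discovery -t sendtargets -p filer1
    iscsiadm -m node --login      # the LUN appears as an ordinary /dev/sd*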

Hope this was helpful.

steve

-- 
Steve Jenkin, Info Tech, Systems and Design Specialist.
0412 786 915 (+61 412 786 915)
PO Box 48, Kippax ACT 2615, AUSTRALIA

sjenkin at canb.auug.org.au http://members.tip.net.au/~sjenkin

