building a RAID system - 8 drives - drive-net - tapes

Bill Broadley bill at math.ucdavis.edu
Fri Oct 10 01:43:57 EDT 2003


On the hardware vs software RAID thread.  A friend needed a few TB and
bought a high-end RAID card (several $k), multiple channels, an enclosure,
and some tens of 73 GB drives, for somewhere in the $50k-$100k neighborhood.

He needed the capacity and a minimum of 50 MB/sec sequential write
performance (on large sequential writes).  He didn't get it.  Call #1 to
Dell resulted in "well, it's your fault, it's our top of the line, it
should be plenty fast", blah, blah, blah.  Call #2 led to an escalation
to someone with more of a clue: tune parameter X, tune Y, try a different
RAID setup, swap out X, etc.  After more testing without any improvement,
call #3 was escalated again and someone fairly clued-in answered.  The
conversation went along the lines of: what?  Yeah, it's dead slow.  Yeah,
most people only care about the reliability.  Oh, performance?  We use
Linux + software RAID on all the similar hardware we use internally
at Dell.

So the expensive controller was returned, and Adaptec 39160s were used
in its place (dual-channel U160), and performance went up by a factor
of 4 or so.

In my personal benchmarking on a 2-year-old machine with 15 drives,
I managed 200-320 MB/sec sustained (large sequential read or write),
depending on filesystem and stripe size.  I've not witnessed any "scaling
problems"; I've been quite impressed with Linux software RAID under
all conditions, and have had it run significantly faster than several
expensive RAID cards I've tried over the years.  Surviving hot-swap,
over-500-day uptimes, and substantial performance advantages seem to
be common.
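
For anyone wanting to reproduce that kind of test, here's roughly what
I mean (a minimal sketch; the device names, RAID level, chunk size, and
drive count below are placeholders, not my actual setup):

    # build a software RAID5 array across 8 disks, 64k chunks
    mdadm --create /dev/md0 --level=5 --raid-devices=8 \
          --chunk=64 /dev/sd[a-h]1
    mke2fs /dev/md0
    mount /dev/md0 /mnt/raid

    # large sequential write, then read back; use a file much bigger
    # than RAM so the page cache doesn't flatter the numbers
    dd if=/dev/zero of=/mnt/raid/big bs=1024k count=8192
    dd if=/mnt/raid/big of=/dev/null bs=1024k

Vary the chunk size and the filesystem and you'll see swings on the
order of the 200-320 MB/sec range above.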

Anyone have numbers comparing hardware and software RAID using bonnie++
for random access, or maybe PostMark (NetApp's disk benchmark)?
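
For concreteness, something like the following is what I'd want to see
numbers from (the mount point and sizes are just examples; the bonnie++
file size should be well past RAM):

    # bonnie++: -d test directory, -s file size in MB,
    # -u user to run as when invoked as root
    bonnie++ -d /mnt/raid -s 8192 -u nobody

    # postmark reads its config interactively (or from stdin)
    postmark
    pm> set location /mnt/raid
    pm> set number 10000
    pm> set transactions 50000
    pm> run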

Failures so far:
* 3ware 6800 (awful, evil, slow, unreliable, terrible tech support)
* quad-channel SCSI card from Digital/StorageWorks: rather slow, then
  started crashing
* More recently (last 6 months), the top-of-the-line Dell RAID card (PERC?)
* A few random others

One alternative solution I figured I'd mention: the Apple Xserve RAID.
2.5 TB for $10-$11k isn't a bad deal for a mostly turnkey, hot-swap,
redundant-power-supply setup with a warranty.  The dual 2 Gbit Fibre
Channel ports do make it easier to scale to tens of TB than some other
solutions.  I managed 70 MB/sec read/write to a half-populated Xserve
RAID (on a single FC channel).  Of course there are cheaper solutions.

Oh, I also wanted to mention one gotcha for the DIY methods.  I've had,
I think, 4 machines now with 8-15 disks and dual 400 watt power supplies
(or 3x225 watt, N+1) boot just fine for 6 months, but then start
complaining at boot due to too-high power consumption.  This is of course
especially bad with EIDE drives, since they all spin up at boot (SCSI
drives can usually be spun up one at a time).  I suspect the cause is a
slight decrease in lubrication and/or degradation of the power supplies,
which were possibly running above 100% of rating.
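
If you want staggered spin-up on SCSI without relying on the controller
BIOS, sg3_utils can send a START UNIT to each drive by hand.  A sketch
(this assumes the drives are jumpered not to spin up at power-on, and
the sg device names are examples):

    # spin the drives up one at a time, pausing between each
    for d in /dev/sg0 /dev/sg1 /dev/sg2 /dev/sg3; do
        sg_start $d 1     # trailing 1 = start the unit (0 stops it)
        sleep 10
    done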

In any case, great thread.  I've yet to see a performance or
functionality benefit from hardware RAID.

-- 
Bill Broadley
Mathematics
UC Davis