[Beowulf] Storage - the end of RAID?

Joe Landman landman at scalableinformatics.com
Fri Oct 29 16:10:11 EDT 2010


On 10/29/2010 03:02 PM, Ellis H. Wilson III wrote:
> On 10/29/10 13:18, Greg Lindahl wrote:
>> On Fri, Oct 29, 2010 at 05:42:39PM +0100, Hearns, John wrote:
>>
>>> Quite a perceptive article on ZDnet
>>>
>>> http://www.zdnet.com/blog/storage/the-end-of-raid/1154?tag=nl.e539
>>
>> This has been going on for a long time. Blekko has 5 petabytes of
>> disk, and no RAID anywhere. RAID went out with SQL. Kinda funny that
>> HPC is slower to abandon RAID than other kinds of computing...

The danger in broad sweeping generalizations is that they tend to be 
incorrect (yes, a recursive joke ... I went there ...)

More seriously, much of business is decidedly *not* abandoning RAID 
(note:  we don't care, we sell storage either way, with or without 
RAID).  More to the point, many folks can't get their heads around 
"losing" storage to RAID10 (i.e. mirroring combined with striping). 
The business folks, in particular, are generally quite averse to the 
concept of such replication.

I explain it like this.  RAID (for resiliency) is there simply to buy 
you time to replace a failed drive.  Nothing else.  RAID for performance 
(various combinations of striping with varying resiliency) is there to 
reduce the impact of any single slow drive.  You can effectively 
parallelize I/O across multiple drives, each delivering roughly 50-150 
MB/s (in the case of spinning rust), and hide the latency of multiple 
queued writes.  With the RAID5/RAID6 parity calculation, you also get 
some level of erasure coding.
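To put rough numbers on the striping argument (the 8-drive layout and the 100 MB/s per-drive rate below are illustrative assumptions, not figures from this thread):

```python
# Back-of-the-envelope: ideal aggregate streaming bandwidth of a stripe set.
# Parity drives don't contribute user data bandwidth on large sequential I/O.

def stripe_bandwidth(n_drives, per_drive_mb_s, parity_drives=0):
    """Ideal sequential throughput: data drives streaming in parallel."""
    data_drives = n_drives - parity_drives
    return data_drives * per_drive_mb_s

# An 8-drive RAID6 (2 parity drives) with mid-range 100 MB/s spinning disks:
print(stripe_bandwidth(8, 100, parity_drives=2))  # -> 600 (MB/s, ideal)
```

Real arrays fall short of this ceiling (controller overhead, seeks, the parity computation itself), but the point stands: the slowest member gates the stripe.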

... but ....

RAID IS NOT A BACKUP (can't say how many times I've had to say this to 
customers).  It can (and does) occasionally fail.  The only *guaranteed* 
way to prevent the failure from increasing entropy significantly in the 
universe is to have a recent copy of all the relevant data.

Which is RAID1 all over again.

RAID (re)builds take a long time.  This has to do with the design of 
RAID.  There are some techniques that will only rebuild used blocks, 
which is great, though irrelevant once you cross the 50% utilization 
line on your storage.  Your data is at higher risk during these rebuilds 
unless you have a recent backup (e.g. a bit-level mirror copy).
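A quick sketch of why rebuilds take so long: a full rebuild has to rewrite the entire replacement drive, bounded by its sustained write rate (the 2 TB capacity and 100 MB/s rate here are illustrative assumptions):

```python
# Best-case rebuild time: whole-drive rewrite at the sustained write rate.
# Real rebuilds are slower still if the array is serving I/O at the same time.

def rebuild_hours(capacity_tb, sustained_mb_s):
    capacity_mb = capacity_tb * 1_000_000  # decimal TB -> MB, as drives are sold
    return capacity_mb / sustained_mb_s / 3600

print(round(rebuild_hours(2, 100), 1))  # -> 5.6 (hours, best case)
```

And that is the *floor*: a loaded array rebuilding in the background can take days, which is exactly the window of elevated risk described above.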

Neat how it always gets back to making a copy.

This said, many businesses buy a single RAID and then never back it up. 
  We try to warn them.  No use.  That is, until something happens, and 
we get calls to our support line.


> I think it's making a pretty wild assumption to say search engines and
> HPC have the same I/O needs (and thus can use the same I/O setups). If
> RAID isn't gone from the domain, there is probably a pretty good reason
> for it. Also, I'd be blown away if Blekko wasn't doing its own
> striping/redundancy - even if they aren't using RAID 0 or 1 by the book,
> they probably are using the same concepts (though hand-spun for search
> engine needs). I don't think the "whole internet" takes up 5 petabytes,
> so they probably have a couple copies for redundancy and performance or
> heterogeneous disk arrays to service more/less accessed items on the net.
>

It almost doesn't matter how you replicate, as long as a) you actually 
do, b) the replicas are bit-level copies, and c) they are recent enough 
to be meaningful.  RAID1 is "instantaneous" copying.  There are degrees 
outside of that (snapshots, and backups of same).
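Criterion b) is the one people skip checking. A minimal sketch of verifying that two copies really are bit-level identical, by comparing checksums (the function names and file paths are hypothetical, not from any tool mentioned here):

```python
# Verify two copies are bit-identical by comparing SHA-256 digests.
# Reads in chunks so arbitrarily large files don't blow out memory.
import hashlib

def file_digest(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def copies_match(primary, replica):
    """True only if every bit of the two files agrees."""
    return file_digest(primary) == file_digest(replica)
```

Run this against the primary and the replica after each backup cycle; a mismatch means your "copy" would not have saved you.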




-- 
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: landman at scalableinformatics.com
web  : http://scalableinformatics.com
        http://scalableinformatics.com/jackrabbit
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615
_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
