[Beowulf] Surviving a double disk failure
billycrook at gmail.com
Fri Apr 10 14:05:29 EDT 2009
On Fri, Apr 10, 2009 at 12:27, Joe Landman
<landman at scalableinformatics.com> wrote:
> We have 1 customer using 24 drives (22 for data with 2 hot spares) as an md
> raid6 on DeltaV. Normally we'd suggest something smaller (collections of
> RAID6 and then striping across them to form RAID60's).
> With late model kernels, mdadm, and using 1.2 metadata on the md's, you
> should be able to build fairly sizeable stripe width devices. I've heard
> limits of 255, but never tested this far out.
According to a 2004 paper by H. Peter Anvin entitled 'The mathematics
of RAID-6', RAID-6 as implemented in the Linux kernel can support a
maximum of 257 drives (255 drives' worth of data and 2 drives' worth
of parity, distributed evenly, of course). It's a limitation of the
Galois field algebra upon which Linux's RAID-6 is based.
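To see where 257 comes from, here's a small sketch of my own (not from
Anvin's paper or the kernel source): the Q parity assigns each data disk
d a distinct coefficient g**d in GF(2^8), with generator g = 2 and the
RAID-6 field polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11d). Since 2
generates all 255 nonzero field elements, at most 255 data disks can get
distinct coefficients; add the P and Q drives and you get 257.

```python
def gf256_mul2(x):
    """Multiply by 2 in GF(2^8) using the RAID-6 polynomial 0x11d."""
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
    return x & 0xFF

# Walk the powers 2**0, 2**1, ... and see how many are distinct.
coeffs = []
x = 1  # 2**0
for _ in range(255):
    coeffs.append(x)
    x = gf256_mul2(x)

assert len(set(coeffs)) == 255  # 255 distinct per-disk Q coefficients
assert x == 1                   # the cycle closes: 2**255 == 1 in GF(2^8)
print("max data disks:", len(coeffs), "-> total with P+Q:", len(coeffs) + 2)
```

Running it prints 255 data disks and 257 total, matching the limit above.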
To determine how many disks you want in a RAID array, do some research
on that model's MTBF and sustained throughput, and make sure the
probability of an unrecoverable read error occurring on any one of the
drives during a rebuild is low enough for your comfort. As a very,
very general rule, you might put no more than 8 TB in a RAID-5 and no
more than 16 TB in a RAID-6, including what's used for parity, and
assuming magnetic, enterprise/RAID drives. YMMV. Test all new drives,
keep good backups, etc.
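As a worked version of that rebuild-risk estimate (my own
back-of-the-envelope sketch; the 1e-15 errors-per-bit rate is a typical
enterprise spec-sheet figure, not a number from this thread):

```python
def p_clean_rebuild(bytes_read, ure_per_bit=1e-15):
    """Probability of reading `bytes_read` during a rebuild with no
    unrecoverable read error, assuming independent errors per bit."""
    bits = bytes_read * 8
    return (1.0 - ure_per_bit) ** bits

TB = 10 ** 12
# A rebuild must re-read all surviving data, so the array's usable
# capacity is roughly what gets read:
for total in (8 * TB, 16 * TB, 100 * TB):
    print(f"{total // TB:>4} TB read: P(no URE) = {p_clean_rebuild(total):.3f}")
```

At 8 TB the survival odds are still comfortable; by 100 TB they have
fallen below a coin flip, which is the intuition behind the 8/16 TB
rules of thumb above (and why RAID-6's second parity drive matters:
it can absorb one URE during a single-disk rebuild).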
Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf