[Beowulf] Big storage
ballen at gravity.phys.uwm.edu
Fri Aug 31 06:26:48 EDT 2007
I'm planning to buy a handful (8 to 16) of X4500s a bit later this year, so
I am quite interested in the details.
> All RAID operations are done in software by ZFS (which acts as both a
> filesystem and a volume manager).
> ZFS has a "scrub" command that does a background scan of a pool (a set
> of disks), it's not done automatically by default but can be automated
> very easily.
> The "scrub" also verifies the parity/checksum of the data blocks.
> There is an example of this in one of the ZFS demos on the OpenSolaris web
> site: <http://www.opensolaris.org/os/community/zfs/demos/selfheal/>.
> The data found to be damaged during a "scrub" is corrected/repaired
> (immediately) using the various parity/checksum/mirror information
> available (see the "zpool" man page on the OpenSolaris web site).
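That matches what I've read. For the record, automating the scrub looks
straightforward; a minimal sketch (the pool name "tank" is made up):

```shell
# Kick off a background scrub of the pool (pool name "tank" is hypothetical)
zpool scrub tank

# Check scrub progress and see any repaired or unrecoverable errors
zpool status -v tank

# To automate it, a root crontab entry could run a scrub every Sunday at 03:00:
# 0 3 * * 0 /usr/sbin/zpool scrub tank
```

The scrub runs in the background, so "zpool status" is how you find out
whether it repaired anything.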
> ZFS also offers more redundancy than plain RAID-5 or RAID-6 (the latter
> corresponding to "raidz2"), especially in the newer ZFS versions.
> The "ditto blocks" for instance, allow you to have multiple copies of
> the same data or metadata.
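If I understand the ditto-block feature correctly, the extra copies are
requested per filesystem via the "copies" property, something like this
(the dataset name is hypothetical):

```shell
# Store two copies of every data block in this filesystem, on top of
# whatever pool-level redundancy (mirror/raidz) already exists
zfs set copies=2 tank/important

# Verify the setting
zfs get copies tank/important
```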
In a system with 24 x 500 GB disks, I would like to have usable storage of
20 x 500 GB and use the remaining disks for redundancy. What do you
recommend? If I understand correctly, I can't boot from ZFS, so one or more
of the remaining 4 disks might be needed for the OS.
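One layout I've been considering, assuming raidz2 and ignoring the boot-disk
question for a moment: two 12-disk raidz2 vdevs in one pool, which gives
2 x 10 = 20 data disks and 4 parity disks. A sketch (the device names are
made up; an X4500 spreads its 48 slots across 6 controllers):

```shell
# Two 12-disk raidz2 vdevs: 20 disks of usable capacity, 4 for parity
# (device names are hypothetical)
zpool create tank \
    raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
           c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
           c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0
```

That uses all 24 disks, though, which brings me back to the question of
where the OS lives.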
> Another nice feature is the "parity check on read" (similar to what DDN
> disk controllers do).
> I am not a Sun employee or share holder, we just have many X4500 (soon
> more than 100, probably around 130 at the end of the year).
> We've had X4500s in "production" for almost a year and they have
> proved to be very reliable, very fast and, overall, pretty cheap.
I have heard similar reports from my collaborators at a few other
institutions.
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf