[Beowulf] PVFS on 80 proc (40 node) cluster
jeffrey.b.layton at lmco.com
Mon Nov 1 12:36:29 EST 2004
Robert Latham wrote:
>On Sun, Oct 31, 2004 at 10:14:44PM -0500, Brian Smith wrote:
>>PVFS2 has much improved fault tolerance over PVFS1 in that there can be
>>redundant file nodes, whereas with PVFS1, if one node dropped dead, your
>>FS was toast.
Let me jump in alongside Rob to explain that if you lose a node with PVFS,
your file system is not toast. Yes, any data that was written to that node's
disk won't be retrievable, and any data being actively read or written
when the node is lost is gone as well. However, PVFS will continue to
function with fewer nodes: any new files simply won't use the missing
node, and PVFS will continue on its merry way.
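To make the failure behavior above concrete, here is a hypothetical sketch (not PVFS code; node names and the stripe size are made up) of round-robin striping: losing one I/O node costs you only the stripes stored there, and new files are striped over the survivors.

```python
# Hypothetical sketch (not PVFS code) of why losing one I/O node
# loses only the stripes stored on it, while the file system keeps running.
NODES = ["ion0", "ion1", "ion2", "ion3"]   # I/O (file) nodes
STRIPE_SIZE = 64 * 1024                    # bytes per stripe (assumed)

def stripe_map(file_size, nodes):
    """Round-robin assignment of a file's stripes to I/O nodes."""
    n_stripes = -(-file_size // STRIPE_SIZE)  # ceiling division
    return [nodes[i % len(nodes)] for i in range(n_stripes)]

# A 256 KB file striped over four nodes: one 64 KB stripe per node.
layout = stripe_map(256 * 1024, NODES)

# Suppose ion2 dies: only the stripes that lived on ion2 are unreadable.
lost = [i for i, node in enumerate(layout) if node == "ion2"]

# New files are simply striped over the surviving nodes.
survivors = [n for n in NODES if n != "ion2"]
new_layout = stripe_map(256 * 1024, survivors)
```

The point is that the stripe map is per-file, so existing files on healthy nodes and all new files are unaffected by the dead node.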
In the case of PVFS1 this is not true for the metadata server: if you
lose it, PVFS goes down. However, a simple mirror of the metadata disk,
or, if you like, an HA configuration for the metadata server, will
protect against that.
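As an illustration of the disk-mirroring option (my example, not from the post; the device names and mount point are assumptions), a Linux software RAID-1 mirror for the metadata server's disk could be set up with mdadm:

```shell
# Hypothetical example: mirror the metadata partition across two disks
# so a single disk failure doesn't take down the PVFS metadata server.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext3 /dev/md0          # file system for the metadata area
mount /dev/md0 /pvfs-meta   # assumed path where the metadata server stores its data
```

This only protects against disk failure on the metadata node; a full HA setup would also fail the server process over to another machine.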
Remember that PVFS is a high-speed *scratch* file system. So, you
write your files to it and then copy them off of PVFS to a more
resilient file system or backup (see Rob's comment below).
>Please don't let the lack of software redundancy scare you off! Many
>many sites have run PVFS and not found reliability to be a problem.
>Your application can do its I/O, writing out checkpoints or reading
>datafiles or whatever IO it does to PVFS. After your application
>runs, move the data to tape or long-term storage at your leisure.
>PVFS is fast scratch space, and as long as you treat it as such,
>everything should work just fine.
Dr. Jeff Layton
Aerodynamics and CFD
Lockheed-Martin Aeronautical Company - Marietta
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf