PVFS for scratch? (was lots of other things)

Robert Ross rross at mcs.anl.gov
Fri Dec 7 18:08:40 EST 2001


Selva,

Performance of PVFS depends greatly on access pattern and workload, just
as with any file system.  PVFS tends to do well with large accesses and
handles concurrent access well.
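To make "large accesses" concrete: PVFS stripes a file round-robin across its I/O nodes, so one large contiguous request is serviced by many servers in parallel, while each tiny request lands on a single server and pays full per-request overhead. A toy sketch of the striping arithmetic (the 64 KB stripe size and 8 servers are assumptions for illustration, not your cluster's actual configuration):

```python
def servers_touched(offset, length, stripe_size=65536, n_servers=8):
    """Return the set of I/O servers a contiguous request maps to
    under round-robin striping (PVFS-style layout, assumed params)."""
    first = offset // stripe_size
    last = (offset + length - 1) // stripe_size
    return {s % n_servers for s in range(first, last + 1)}

# A 4 MB request spans all 8 servers, so they service it in parallel:
print(len(servers_touched(0, 4 * 1024 * 1024)))   # 8
# A 4 KB request touches just one server:
print(len(servers_touched(0, 4096)))              # 1
```

This is why access pattern dominates: the same total volume moved in many small requests serializes on individual servers, while large requests engage the whole stripe set at once.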

What do access patterns of your application(s) look like?  Are they coded
to use the UNIX interface, MPI-IO, or some higher-level interface?  How
big a cluster are we talking about using (sorry, didn't try to backtrace
the thread)?  These all matter quite a bit.

I'd be happy to talk with you more about this, or you could jump on the
pvfs-users mailing list (info off http://www.parl.clemson.edu/pvfs/) and
ask for some more input there.

Regards,

Rob
---
Rob Ross, Mathematics and Computer Science Division, Argonne National Lab


On Fri, 7 Dec 2001, Selva Nair wrote:

> 
> (starting a new thread snipping off the diskless one)
> 
> On Fri, 7 Dec 2001, Troy Baer wrote:
> >
> > Well, there's nothing keeping you from keeping the root filesystem on NFS
> > and using local disk for swap, /tmp, and /var.  We do that on our Pentium
> > III and Itanium clusters, and it seems to work pretty well.  The biggest
> > problem is user education ("No, 'cp file /tmp' does not copy it out to all
> > the nodes in your job, use pbsdcp instead").
> >
> >       --Troy
> 
> Ah, that brings us back to the happy world of "diskful nodes"!
> Speaking of ease of use, PVFS spread over all the local disks
> is an alternative. Is anyone using PVFS with applications
> such as G98 that do a lot of scratch I/O? I wonder how it
> compares in performance with NFSv3 or purely local storage.
> Any input would be helpful.
> 
> Thanks,
> 
> Selva

_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf


