[Beowulf] scratch File system for small cluster
tjrc at sanger.ac.uk
Thu Sep 25 11:06:01 EDT 2008
On 25 Sep 2008, at 3:19 pm, Joe Landman wrote:
> BLAST uses mmap'ed IO. This has some interesting ...
> interactions ... with parallel file systems.
It's not *too* bad on Lustre. We use it in production that way.
>> Are there other recommendations for fast scratch space (it doesn't have
>> to be a parallel file system, something with less hardware would be
> Pure software: GlusterFS currently, ceph in the near future. GFS
> won't give you very good performance (meta-data shuttling limits
> what you can do). You could go Lustre, but then you need to build
> MDS/OSS setups so this is hybrid.
Lustre still has some interesting performance corners. Random access
with small reads is weak, so don't try putting DBM files on it, for
example.
> Pure hardware: Panasas (awesome kit, but not for the light-of-
> wallet), DDN, Bluearc (same comments for these as well).
We have seen some scaling/stability issues with BlueArc NFS heads, at
least on our SAN hardware. At the scale the OP is suggesting though,
it'll be fine (and they certainly are fast).
The Wellcome Trust Sanger Institute is operated by Genome Research
Limited, a charity registered in England with number 1021457 and a
company registered in England with number 2742969, whose registered
office is 215 Euston Road, London, NW1 2BE.
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf