diskless node + g98?
landman at scalableinformatics.com
Thu Jan 23 13:32:42 EST 2003
G98 tends to be an I/O-bound code, depending upon the nature and size
of the calculation. Having non-local I/O means that it will likely be
slow if the network is not gigabit or a similar-speed technology
(Myrinet, SCALI, etc.).
Local I/O with cheap 7200 RPM IDE UltraATA/100 disks can hit 30+ MB/s
sustained on large-block sequential reads, and mid-20s for similar writes.
100BaseT will limit you to at best 12 MB/s for reads or writes, and
there will be other gating factors (load on the file server, load on
the net, etc.). A gigabit-speed network (~100 MB/s) will push the
bottleneck back to the server rather than the net, as the server would
in theory be asked to supply 12 concurrent gigabit-speed streams.
Needless to say, few servers can handle even 1/6 of that. Your
performance curve will be different, and a single diskless node will
probably perform better than a single node with disk (if your server's
file system, memory, and PCI busses are fast), but performance would
likely become asymptotically similar to the 100BaseT case as you scale up.
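The arithmetic above can be sketched in a few lines. This is only a back-of-envelope model using the figures quoted in this post (circa-2003 hardware), assuming all 12 nodes hit the file server's scratch space at once and share its link evenly:

```python
# Back-of-envelope per-node scratch I/O bandwidth (figures from the post).
NODES = 12  # diskless nodes in the cluster from the question

# Sustained large-block sequential rates, in MB/s.
LOCAL_IDE = 30.0      # cheap 7200 RPM UltraATA/100 disk, node-local
FAST_ETHERNET = 12.0  # 100BaseT, ~12 MB/s total through the server
GIG_E = 100.0         # gigabit-class link, ~100 MB/s total

def per_node(shared_mb_s: float, nodes: int = NODES) -> float:
    """Bandwidth each node sees when all nodes read/write concurrently."""
    return shared_mb_s / nodes

print(f"local IDE       : {LOCAL_IDE:5.1f} MB/s per node")
print(f"100BaseT server : {per_node(FAST_ETHERNET):5.1f} MB/s per node")
print(f"gigabit server  : {per_node(GIG_E):5.1f} MB/s per node")
```

Even with the optimistic even-split assumption, the shared gigabit server delivers well under a third of what each node's own cheap IDE disk would, which is the asymptote argument above in numbers.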
OTOH, if you load a single IDE drive into each node (generally cheap and
fast, 5-10 minutes per node) with the appropriate cabling, each cluster
node can do local I/O at local I/O speeds. Alternatively, you could buy
a scalable storage appliance to get there (they are starting to come
out; I am not talking about NAS or SAN) when they are ready.
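With a local drive in place, pointing each serial G98 job at node-local scratch is a small job-launch detail. A minimal sketch, assuming a local disk mounted at `/scratch` on every node (a hypothetical mount point) and that the code reads its scratch location from the `GAUSS_SCRDIR` environment variable, as Gaussian does:

```python
# Sketch: give each serial G98 job a unique node-local scratch directory.
# Assumptions: a local disk is mounted at /scratch on every node, and the
# job reads GAUSS_SCRDIR (Gaussian's scratch-directory variable).
import os
import tempfile

def local_scratch_env(base: str = "/scratch") -> dict:
    """Return a copy of the environment with a per-job scratch dir set."""
    scratch = tempfile.mkdtemp(prefix="g98-", dir=base)
    env = dict(os.environ)
    env["GAUSS_SCRDIR"] = scratch
    return env

# The job would then be launched with this environment, e.g.:
# subprocess.run(["g98", "input.com"], env=local_scratch_env())
```

A per-job directory under the local mount keeps concurrent serial runs on the same node from clobbering each other's read-write files.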
For your work, my guess would be that the IDE drive would be the
highest-performance option.
Joseph Landman, Ph.D
Scalable Informatics LLC,
email: landman at scalableinformatics.com
web : http://scalableinformatics.com
phone: +1 734 612 4615
lmathew at okstate.edu wrote:
>Beowulf list readers:
>I have a Beowulf cluster (12 diskless nodes, 1 fileserver/master) with 26 processors (total) that is configured to run computational simulations in both parallel and serial (pretty standard for this list). I am interested in utilizing my cluster to run a series of serial g98 calculations on each node. These calculations (as many of you know) require a "scratch" space. How can this scratch space be provided to a diskless node? Here are a few options that I have identified.
>1). Mount a LARGE ram drive? (1GB in size if possible??)
>2). Install hard disk drives in each of the slave nodes? (unattractive)
>3). Use a drive mounted via NFS/PVNFS? (large amount of communication)
>Has anyone encountered this? If so...what was the workaround that was implemented? I am open to any suggestions and comments. :)
>Mechanical and Aerospace Engineering
>Oklahoma State University
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf