[Beowulf] cluster software supporting parallel CFD computing
James.P.Lux at jpl.nasa.gov
Sat Sep 16 10:56:09 EDT 2006
At 11:08 AM 9/15/2006, Patrick Geoffray wrote:
>Toon Knapen wrote:
>>For instance, I wonder if any real-life application got a 50% boost
>>by just changing the switch (and the corresponding MPI
>>implementation). Or, what is exactly the speedup observed by
>>switching from switch A to switch B on a real-life application?
>I could not agree more. We should always keep in mind that a
>parallel application mostly computes and, from time to time, sends
>messages :-) Interconnect people often lose track of this, and using
>micro-benchmarks with no computation yields a warped picture
>of the problem (message rate).
I don't think this is generically true.
Consider, for example, the difference between having a non-blocking
any-to-any interconnect and a blocking interconnect (e.g. shared
ethernet in a worst case scenario).
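The blocking-versus-non-blocking distinction can be sketched with a toy model (my own illustration, with made-up numbers, not figures from the thread): on a non-blocking any-to-any fabric, simultaneous transfers overlap, so a communication phase takes about as long as the slowest single transfer; on a fully blocking shared medium, they serialize and the phase takes the sum.

```python
# Toy model of a communication phase where N nodes each send one message.
# Non-blocking any-to-any: transfers proceed in parallel -> time = max.
# Blocking shared medium (worst-case shared ethernet): transfers
# serialize -> time = sum. Numbers below are illustrative only.

def phase_nonblocking(transfer_times):
    """All transfers overlap; the phase ends when the slowest finishes."""
    return max(transfer_times)

def phase_blocking(transfer_times):
    """Only one transfer at a time; the phase is the sum of them all."""
    return sum(transfer_times)

# Eight nodes, each pushing a message that takes 1 ms on the wire:
times = [1e-3] * 8
print(phase_nonblocking(times))  # ~1 ms
print(phase_blocking(times))     # ~8 ms: 8x longer before anyone computes
```

Even with identical per-message cost, the blocking medium stretches every communication phase by a factor of the node count, which is exactly the kind of effect a no-computation micro-benchmark can miss or exaggerate.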
Many, many moons ago, I was developing software for an Intel iPSC/1
to do simulations where data had to go from node to node, sometimes
over multiple hops (it was a hypercube). The comm rate strongly
affected the performance of the simulation because, often, A would be
doing a calculation that depended on something from B, which depended
on something from C, which depended on an earlier
result from A. It actually ran slower on the 'cube than on a single processor.
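That slowdown falls out of simple arithmetic: when each step of a cyclic dependency chain (A waits on B, B on C, C on A's earlier result) must wait for a multi-hop message before it can compute, per-hop latency is paid on every step. A rough sketch, with hypothetical latency and compute figures rather than actual iPSC/1 numbers:

```python
# Toy model of a serialized dependency chain across a hypercube.
# Each step does a little compute, then its result must cross some
# number of hops before the next node can start. All numbers are
# hypothetical, chosen only to show how hop latency can dominate.

def step_time(compute_s, hops, per_hop_latency_s):
    """One dependency step: compute, then forward the result."""
    return compute_s + hops * per_hop_latency_s

def chain_time(steps, compute_s, hops, per_hop_latency_s):
    """Fully serialized chain: every step waits on the previous message."""
    return steps * step_time(compute_s, hops, per_hop_latency_s)

# Single processor: no hops, so only the compute time remains.
single = chain_time(steps=1000, compute_s=50e-6,
                    hops=0, per_hop_latency_s=0)

# Hypercube: each result crosses 3 hops at 200 us per hop, dwarfing
# the 50 us of useful work per step.
cube = chain_time(steps=1000, compute_s=50e-6,
                  hops=3, per_hop_latency_s=200e-6)

print(single)  # ~0.05 s of pure compute
print(cube)    # ~0.65 s: the parallel machine loses to one processor
```

With these made-up figures the cube spends over 90% of its time waiting on the wire, which is the shape of the pathology described above: the comm rate, not the compute rate, sets the pace.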
Yes, the software structure was badly designed for the
interconnect. HOWEVER, the whole point of computing resources is
that I really shouldn't have to design the whole software system
around the peculiarities of the hardware platform.
James Lux, P.E.
Spacecraft Radio Frequency Subsystems Group
Flight Communications Systems Section
Jet Propulsion Laboratory, Mail Stop 161-213
4800 Oak Grove Drive
Pasadena CA 91109
Beowulf mailing list, Beowulf at beowulf.org