very high bandwidth, low latency manner?
joachim at lfbs.RWTH-Aachen.DE
Thu Apr 18 04:56:59 EDT 2002
Markus Fischer wrote:
> I don't think we have a performance bug. We have developed
> a real world application using frequent communication and
> have tested/run it on multiple systems.
I think Hakon was thinking of a performance bug in ScaMPI (the MPI
library), not in your application.
> No, I said that with larger numbers of nodes (I would like to talk
> about >100 , but here I mean more than 16) the scalability is limited
> (amount spent in communication increases significantly and speedup
> values decrease after a certain number of nodes) and yes
> the startup time also increases, which I thought to be caused
> by the SCI mechanisms of exporting/mapping mem).
If you could give some numbers, it would help very much. And which kind
of communication pattern is used in this application? Which MPI
communication calls, which message sizes?
> there is also PD SCI-MPICH which from reading papers applies for
> the same statement.
I am the author of SCI-MPICH. I do not understand the meaning of this
sentence of yours ("applies for the same statement"). What are you
referring to?
Anyway, I would be happy to test your application with SCI-MPICH on our
cluster. You may just want to send me an object file linked against the
dynamic MPICH libraries, if you cannot publish the source code.
My bottom line is: I do not consider it good style to publicly blame a
product for bad performance without having checked back with the people
behind this product, while being a consultant for another product at the
same time.
| _ RWTH| Joachim Worringen
|_|_`_ | Lehrstuhl fuer Betriebssysteme, RWTH Aachen
| |_)(_`| http://www.lfbs.rwth-aachen.de/~joachim
|_)._)| fon: ++49-241-80.27609 fax: ++49-241-80.22339
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf