[Beowulf] Q: IB message rate & large core counts (per node)?

Greg Lindahl lindahl at pbm.com
Fri Feb 19 16:57:30 EST 2010

On Fri, Feb 19, 2010 at 01:25:07PM -0500, Brian Dobbins wrote:

>   I know Qlogic has made a big deal about the InfiniPath adapter's extremely
> good message rate in the past... is this still an important issue?

Yes, for many codes. If I recall stuff I published a while ago, WRF
sent a surprising number of short messages. But really, the right
approach for you is to do some benchmarking. Arguing about
microbenchmarks is pointless; they only give you clues that help
explain your real application results. I believe that both QLogic and
Mellanox have test clusters you can borrow.
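For reference, the message-rate microbenchmarks people argue about are
basically a windowed stream of small nonblocking sends between two ranks.
A rough sketch of my own (window and iteration counts are arbitrary, not
any vendor's tool) looks like this:

    /* Minimal two-rank message-rate sketch: rank 0 streams windows of
     * small sends, rank 1 receives and acks each window.  Reports
     * messages/sec for 8-byte payloads.  Run with >= 2 ranks, one per
     * node if you want to see the fabric rather than shared memory. */
    #include <mpi.h>
    #include <stdio.h>

    #define MSG_SIZE   8       /* bytes per message -- small on purpose */
    #define WINDOW     64      /* sends in flight per window            */
    #define ITERATIONS 10000   /* windows per timed run                 */

    int main(int argc, char **argv)
    {
        int rank;
        char buf[MSG_SIZE] = {0}, ack = 0;
        MPI_Request reqs[WINDOW];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();

        for (int i = 0; i < ITERATIONS; i++) {
            if (rank == 0) {
                for (int w = 0; w < WINDOW; w++)
                    MPI_Isend(buf, MSG_SIZE, MPI_CHAR, 1, 0,
                              MPI_COMM_WORLD, &reqs[w]);
                MPI_Waitall(WINDOW, reqs, MPI_STATUSES_IGNORE);
                MPI_Recv(&ack, 1, MPI_CHAR, 1, 1, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                for (int w = 0; w < WINDOW; w++)
                    MPI_Irecv(buf, MSG_SIZE, MPI_CHAR, 0, 0,
                              MPI_COMM_WORLD, &reqs[w]);
                MPI_Waitall(WINDOW, reqs, MPI_STATUSES_IGNORE);
                MPI_Send(&ack, 1, MPI_CHAR, 0, 1, MPI_COMM_WORLD);
            }
        }

        double t1 = MPI_Wtime();
        if (rank == 0) {
            double msgs = (double)ITERATIONS * WINDOW;
            printf("%.0f messages/sec (%d-byte payloads)\n",
                   msgs / (t1 - t0), MSG_SIZE);
        }

        MPI_Finalize();
        return 0;
    }

But again: treat that number as a clue about what your application will
see, not as a verdict by itself.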

Tom Elken ought to have some WRF data he can share with you, showing
message sizes as a function of cluster size for one of the usual WRF
benchmark datasets.
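If you'd rather gather that kind of message-size histogram from your own
runs, a PMPI shim is the easy way.  A rough sketch (my own example, not
anyone's shipping tool; it only wraps blocking MPI_Send, buckets by
power-of-two size, and prints rank 0's counts -- extend to Isend and the
collectives as needed):

    /* PMPI-interposition sketch: link this ahead of the MPI library and
     * every MPI_Send gets counted by size bucket.  Prototype shown is
     * the MPI-3 const-correct one; drop the consts for older MPIs. */
    #include <mpi.h>
    #include <stdio.h>

    static long buckets[32];   /* bucket i ~ messages of about 2^i bytes */

    int MPI_Send(const void *buf, int count, MPI_Datatype type,
                 int dest, int tag, MPI_Comm comm)
    {
        int size, bits = 0;
        MPI_Type_size(type, &size);
        long bytes = (long)count * size;
        while (bytes >>= 1) bits++;        /* floor(log2) of message size */
        buckets[bits]++;
        return PMPI_Send(buf, count, type, dest, tag, comm);
    }

    int MPI_Finalize(void)
    {
        int rank;
        PMPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)
            for (int i = 0; i < 32; i++)
                if (buckets[i])
                    printf("~%ld-byte sends: %ld\n", 1L << i, buckets[i]);
        return PMPI_Finalize();
    }

Rerun it at a few node counts and you can see for yourself how the
message-size mix shifts as the per-rank domain shrinks.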

>   On a similar note, does a dual-port card provide an increase in on-card
> processing, or 'just' another link?  (The increased bandwidth is certainly
> nice, even in a flat switched network, I'm sure!)

Published microbenchmarks for Mellanox parts in the SDR/DDR generation
showed that only large messages benefited from the second port. I've
never seen any application benchmarks comparing 1-port and 2-port cards.

-- greg
(formerly the system architect of InfiniPath's SDR and DDR generations)