[Beowulf] Q: IB message rate & large core counts (per node)?
lindahl at pbm.com
Fri Feb 19 17:17:21 EST 2010
On Fri, Feb 19, 2010 at 01:47:07PM -0500, Joe Landman wrote:
> The big issue will be contention for the resource.
What "the resource" is depends on implementation.
All network cards are limited by the line rate of the network.
As far as I can tell, the Mellanox IB cards have a limited number of
engines that process messages. For short messages from a lot of CPUs,
they don't have enough. For long messages, they have plenty, & hit the
line rate. Don't storage systems typically send mostly long messages?
The InfiniPath (now True Scale) design uses a pipelined approach. You
can analytically compute the performance on short messages by knowing
2 numbers: the line rate, and the "dead time" between back-to-back
packets, which is determined by the length of the longest pipeline
stage. I was thrilled when we figured out that our performance graph
was exactly determined by that equation. And the pipeline is a
resource that you can't oversubscribe.
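The two-parameter model above can be sketched in a few lines: each back-to-back packet occupies the wire for its length divided by the line rate, plus a fixed dead time set by the longest pipeline stage. The numbers below are illustrative assumptions, not measured InfiniPath figures.

```python
def message_rate(msg_bytes, line_rate_bytes_per_s, dead_time_s):
    """Messages/second for back-to-back packets: wire time per
    packet plus a fixed 'dead time' from the longest pipeline stage."""
    wire_time = msg_bytes / line_rate_bytes_per_s
    return 1.0 / (wire_time + dead_time_s)

# Example with assumed numbers: 8-byte payloads on a 1 GB/s link,
# 30 ns dead time -> roughly 26 million messages/second.
short = message_rate(8, 1e9, 30e-9)

# Long messages amortize the dead time and approach line rate:
long_bw = message_rate(1_000_000, 1e9, 30e-9) * 1_000_000  # ~1 GB/s
```

For long messages the dead time becomes negligible next to the wire time, which is why the same hardware hits line rate there; for short messages the dead time dominates, and the rate is capped at roughly 1/dead_time no matter how many CPUs feed the card.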
(formerly... yada yada)
Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf