interconnect latency, dissected.
Michael T. Prinkey
mprinkey at aeolusresearch.com
Sun Jun 29 18:59:06 EDT 2003
There are at least two MPI Ethernet implementations that try to use transport
layers other than TCP: MVICH and GAMMA.
Both claim ~10-13 usec latencies for older Gbit NICs.
MVICH seems to be dead, and GAMMA is still not SMP-safe. Both are pretty short
on hardware support. AFAIK, neither supports either of the Gbit NICs I have
been using (tg3 and e1000).
I recently tested Netgear GA622Ts end to end with netpipe. Latencies were
roughly 22 usecs for packets <100 bytes, bandwidth was respectable (540 Mbps
for 8k packets, 850 Mbps for ~100k packets). Latencies for Intel e1000
through a cheap 16-port switch were ~63 usecs. Bandwidth was lower across
the board through the switch.
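For anyone who wants to reproduce this kind of number without netpipe, the core of the measurement is just a ping-pong: send a small packet, wait for the echo, and halve the round-trip time. Here is a minimal sketch of that idea over UDP on loopback (all names and defaults are mine, not netpipe's; real NIC-to-NIC numbers obviously require two hosts):

```python
import socket
import threading
import time

def echo_server(sock, n):
    """Echo back exactly n datagrams, then return."""
    for _ in range(n):
        data, addr = sock.recvfrom(2048)
        sock.sendto(data, addr)

def measure_one_way_us(payload_size=64, iters=1000):
    """Estimate one-way latency in usec via a UDP ping-pong on loopback."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(("127.0.0.1", 0))          # kernel picks a free port
    cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    t = threading.Thread(target=echo_server, args=(srv, iters))
    t.start()
    msg = b"x" * payload_size
    addr = srv.getsockname()
    start = time.perf_counter()
    for _ in range(iters):
        cli.sendto(msg, addr)
        cli.recvfrom(2048)              # wait for the echo
    elapsed = time.perf_counter() - start
    t.join()
    srv.close()
    cli.close()
    # round trip / 2 = one-way estimate, converted to microseconds
    return elapsed / iters / 2 * 1e6

if __name__ == "__main__":
    print("estimated one-way latency: %.1f usec" % measure_one_way_us())
```

On loopback this measures mostly the TCP/IP stack and scheduler, which is exactly the part the non-TCP implementations above are trying to bypass.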
Although I haven't done thorough testing, my best guess is that most of the
latency is coming from the switch, so TCP vs UDP vs raw ethernet frames
might be moot.
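A back-of-envelope check supports blaming the switch: a store-and-forward switch must receive a frame completely before forwarding it, so it adds at least one full serialization delay per hop, plus whatever its fabric takes. The arithmetic (link rate assumed to be 1 Gbit/s):

```python
def serialization_delay_us(frame_bytes, link_bps=1e9):
    """Wire time to clock one frame onto the link, in microseconds."""
    return frame_bytes * 8 / link_bps * 1e6

# A full 1500-byte frame at GigE costs 12 usec of wire time per hop;
# a <100-byte frame costs under 1 usec, so for small packets the
# ~40 usec switch penalty must come mostly from switch processing.
print(serialization_delay_us(1500))   # 12.0
print(serialization_delay_us(100))    # 0.8
```

So for the small-packet case, raw wire time cannot explain the gap between ~22 and ~63 usecs; the switch's internal latency dominates, which is consistent with the guess above.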
> does anyone have references handy for recent work on interconnect
> specifically, I'm noticing two things:
> - many of the papers I see that contrast IB with GBE seem to
> claim that GBE latency is much larger than I measure (20-30 us).
> - I haven't found any recent studies on where even 25 us of
> GBE latency comes from. I recall a really great study from years ago
> that broke a single tx/rx down into an explicit timeline,
> but that was for ~100 MIP CPUs and 100bT, I think.
> - are there current/active efforts to use something other than
> TCP for implementing the usual MPI primitives? I'd love to see
> something that used ethernet's broad/multicast support, for instance.
> thanks, mark.
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf