Intel PRO/1000CT Gigabit ethernet with CSA

Daniel Pfenniger daniel.pfenniger at
Fri Jun 27 02:54:33 EDT 2003


For a small experimental cluster (24 dual-Xeon nodes)
we decided to use InfiniBand technology, which according to the
specs offers 4 times the bandwidth (8 Gb/s) and 1.5 times lower
latency (~5 us) than Myrinet, at approximately the same cost per port.
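The comparison can be made concrete with a first-order message-time model, t(n) = latency + n/bandwidth. The InfiniBand figures below are from this post; the Myrinet figures are derived from the stated 4x bandwidth and 1.5x latency ratios, and the GigE latency is only an assumed typical value for MPI over Gigabit Ethernet:

```python
def transfer_time(size_bytes, latency_s, bandwidth_bps):
    """Time to move size_bytes over a link: latency plus serialization.
    Ignores protocol overhead, so this is a best-case estimate."""
    return latency_s + 8 * size_bytes / bandwidth_bps

links = {
    "InfiniBand": (5e-6, 8e9),    # ~5 us, 8 Gb/s (from the post)
    "Myrinet":    (7.5e-6, 2e9),  # 1.5x the latency, 1/4 the bandwidth
    "GigE":       (50e-6, 1e9),   # assumed ~50 us MPI latency, 1 Gb/s
}

for name, (lat, bw) in links.items():
    for n in (1_000, 1_000_000):  # 1 kB and 1 MB messages
        t_us = transfer_time(n, lat, bw) * 1e6
        print(f"{name:10s} {n:>9d} B: {t_us:8.1f} us")
```

For small messages the latency term dominates, so the 1.5x latency gap matters most; for large messages the 4x bandwidth gap dominates.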

Our estimate is that with current 3 GHz-class processors one
needs more bandwidth and lower latency than 2x Gigabit Ethernet
provides in order to make efficient use of the rest of the
hardware, at least for typical MPI-style runs.

Of course there are always sufficiently coarse-grained parallel
applications for which Gigabit Ethernet is a good choice.
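Whether an application is coarse-grained enough can be estimated by comparing per-step communication time against per-step compute time. The sketch below is a back-of-envelope model, not from the post; the workload numbers (1 s of compute, ten 100 kB messages per step) and the ~50 us GigE latency are illustrative assumptions:

```python
def comm_fraction(compute_s, msg_bytes, n_msgs, latency_s, bandwidth_bps):
    """Fraction of each step spent communicating, under a simple
    latency + size/bandwidth model with no overlap of compute and comms."""
    comm = n_msgs * (latency_s + 8 * msg_bytes / bandwidth_bps)
    return comm / (comm + compute_s)

# Hypothetical coarse-grained step: 1 s of compute, 10 x 100 kB messages.
f_gige = comm_fraction(1.0, 100_000, 10, 50e-6, 1e9)
print(f"GigE comm fraction: {f_gige:.1%}")  # well under a few percent here
```

When this fraction stays in the low single digits, a faster interconnect buys little; as messages get smaller and more frequent, the latency term pushes the fraction up and networks like Myrinet or InfiniBand start to pay off.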

At the moment there are very few fat-tree InfiniBand switches
with more than 32 ports, but 96-128 port switches should be
available in the coming months.


Roland Stumpf wrote:
> Hi everybody !
> We are configuring a ~40 node cluster for parallel materials modeling
> applications (e.g. VASP code from Vienna). My guess is that the sweet
> spot in terms of communication speed versus cost is with gigabit
> ethernet. Does anybody have an opinion on the new Intel PRO/1000CT
> Gigabit NIC that travels through the CSA (Communications
> Streaming Architecture) bus ? The Intel fact sheet is, as often, short
> on quantitative detail but it looks promising:
> How does this NIC and a switch compare to Myrinet and similar networks ?
> We would be especially interested in an improvement of the latency for
> MPI communication over other Gigabit or 100Mbit ethernet. Another
> concern is if the faster GigE cards could saturate a fully connected
> modern 24 port switch like the Dell PowerConnect 5224, leading to
> dropped packets and crashing jobs.
> Thanks,
> Roland

Beowulf mailing list, Beowulf at