4-port NICs and channel bonding
siegert at sfu.ca
Tue Jan 23 15:08:05 EST 2001
Let's assume you've decided that Myrinet is too expensive for your
cluster. Hence channel bonding is the (cheap) solution, if you want to get
more than 100Mb/s performance. Furthermore, you would like to use those
slim 2U or even 1U cases. However, you can have at best 3 PCI slots with
2U cases or 1 PCI slot with 1U cases.
There seem to be two solutions to this problem: Either you buy a motherboard
that has onboard video and two onboard ethernet adapters (e.g., Supermicro
370DLR) or you buy a 4-port ethernet card.
The former solution forces you to buy a fairly expensive (compared to the
370DLE) motherboard. For 1U cases you don't have a choice, because the
370DLE won't fit; but even so, I have no use for, e.g., the onboard SCSI.
The latter solution raises the question of how good 4-port ethernet cards
actually are.
I have tested the D-Link DFE570Tx 4-port NIC, which has a DEC DS21143
chip and uses the tulip driver. My test platform consists of two PIII/600MHz
PCs with RH 6.2, 2.2.16 kernel, bonding.o from the 2.2.17 kernel.
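For context, bonding on a 2.2 kernel is set up roughly as follows (a
minimal sketch; the interface names, address, and netmask are placeholders,
not my actual configuration):

```shell
# Load the bonding driver (bonding.o, taken here from the 2.2.17 kernel)
insmod bonding

# Bring up the bond master, then enslave the physical ports.
# bond0's address and eth0..eth2 are placeholders for illustration.
ifconfig bond0 192.168.1.1 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1 eth2
```

The same setup, with a different address, is needed on the peer machine.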
I used ttcp, netperf, and netpipe to test the performance. Here I quote
just the netpipe results for NPtcp, NPmpi compiled with mpich-1.2.1, and
NPmpi compiled with mpipro-1.5b7-3t. Results for 3 channel bonded
3Com 905B NICs are included for comparison. Latencies (lat) are in
microseconds, bandwidth (bw) in Mb/s.
                    |    NPtcp    | NPmpi/mpich | NPmpi/mpipro
                    | lat  |  bw  | lat  |  bw  | lat  |  bw
 3 x 3C905B         |  45  | 268  |  92  | 214  |  97  | 264
 DFE570Tx (3 ports) |  43  | 250  |  94  | 199  | 113  | 241
 DFE570Tx (4 ports) |  43  | 215  |  93  | 173  | 117  | 211
The full results of the tests can be found at
Obviously the performance for channel bonding 4 ports on the DFE570Tx
is worse than the performance when using just 3 ports.
This is an effect of the CPU load: A 600MHz PIII is not powerful enough
to receive 100Mb/s through 4 NICs at the same time. This becomes clear
when you use ttcp over udp (the sender pushes out packets as fast as
it can, the receiver has to do interrupt processing): the sender throughput
is 341 Mb/s, the receiver throughput is 89 Mb/s; the rest is dropped on the
floor (you don't want to use NFS in such a configuration).
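The UDP test above is of the usual ttcp form (a sketch; exact option
spellings vary between ttcp versions, and receiver-host is a placeholder):

```shell
# On the receiving machine: sink UDP data (-r receive, -u UDP, -s sink)
ttcp -r -u -s

# On the sending machine: push UDP packets at the receiver as fast as
# possible; the receiver has to pay the interrupt-processing cost
ttcp -t -u -s receiver-host
```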
Unfortunately, the DFE570Tx with 3 channel bonded ports does not perform
as well as 3 905B's channel bonded (which is disappointing particularly
as 3 905B's are cheaper than a DFE570Tx at least in Canada). The reason is
that the 905B chip/3c59x driver combination performs better than the
DFE570Tx/tulip driver combination. My understanding is that the 905B
has an onboard processor (similar to the Intel EEPro100) that can offload
some of the tasks from the cpu resulting in a lower cpu load.
1. For the tulip driver experts: are there any driver flags that I could
try to get a better performance?
2. I am concerned about UDP performance with respect to NFS reliability.
This may not be relevant anymore since the 2.4.0 kernel does not
list NFS-v3 as experimental anymore. Hence I could use NFS over TCP
and could also use a larger buffer size so that the performance may
remain the same (or even be better?). Has anybody tried this already?
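If NFS over TCP does work, the mount would presumably look something like
this (a sketch I have not verified myself; server name, export path, mount
point, and transfer sizes are placeholders):

```shell
# Mount an NFS v3 export over TCP with larger transfer sizes.
# server:/export, /mnt/export, and the rsize/wsize values are placeholders.
mount -t nfs -o tcp,nfsvers=3,rsize=8192,wsize=8192 server:/export /mnt/export
```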
3. The Matrox NS-FNIC/4 is a 4-port NIC that uses the Intel chip.
Has anybody tried it? The crucial disadvantage is its price: it's
more than twice as expensive as the DFE570Tx ...
4. Has anybody found a solution for getting 3 NICs (assuming the motherboard
has one onboard ethernet adapter, which I need for NFS connections)
plus a video adapter into a 2U case?
Academic Computing Services phone: (604) 291-4691
Simon Fraser University fax: (604) 291-4242
Burnaby, British Columbia email: siegert at sfu.ca
Canada V5A 1S6
Beowulf mailing list
Beowulf at beowulf.org