[Beowulf] e1000 performance
Trent Piepho
xyzzy at speakeasy.org
Mon Mar 8 16:59:59 EST 2004
On Mon, 8 Mar 2004, Michael T. Prinkey wrote:
> I am building a small cluster that uses Tyan S2723GNN motherboards that
> include an integrated Intel e1000 gigabit NIC. I have installed two
From a Supermicro X5DPL-iGM (E7501 chipset) with onboard e1000 to a Supermicro
E7500 board with an e1000 PCI-X gigabit card, via a Dell 5224 switch. The
E7501 board has a 3ware 8506 card on the same PCI-X bus as the e1000 chip, so
that bus is running at 64-bit/66 MHz. The PCI-X card in the other box is
running at 133 MHz.
TCP STREAM TEST to duet
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

131070 131070   1472    9.99       940.86
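
For reference, that output is from netperf's TCP_STREAM test. An invocation
along these lines should produce a comparable run (the host name "duet" and
the 10-second length come from the output above; the socket-buffer and
message-size options are my guess at what was used, and the kernel may round
the buffer sizes it actually grants):

    netperf -H duet -t TCP_STREAM -l 10 -- -s 131070 -S 131070 -m 1472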
Kernel versions are 2.4.20 (PCI-X card) and 2.4.22-pre2 (the onboard chip).
2.4.20 has driver 4.4.12-k1, while 2.4.22-pre2 has driver 5.1.11-k1.
The old e1000 driver has a very useful proc interface under
/proc/net/PRO_LAN_Adapters that gives all kinds of information. I have RX
checksum offload and flow control turned on. The newer driver no longer
provides that proc file.
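
With the newer driver you can get much of the same information from ethtool
instead ("eth0" is just a placeholder, and the stats option depends on your
ethtool and driver versions being recent enough):

    ethtool -i eth0     # driver name and version
    ethtool -a eth0     # pause (flow control) settings
    ethtool -k eth0     # offload settings, including RX checksumming
    ethtool -S eth0     # per-adapter statistics, if the driver supports it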
> the NAPI e1000 driver in the 2.4.24 kernel. I have tried the following
> measures without any improvement:

NAPI?
I've done nothing special wrt gigabit performance, other than turning on flow
control. I found that without flow control, TCP connections to 100 Mbit hosts
would hang.
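
If you want to try the same thing, flow control can be toggled at runtime with
ethtool (again, "eth0" is only an example, and not every driver/ethtool
combination supports the pause options):

    ethtool -A eth0 autoneg on rx on tx on    # enable pause frame autoneg/rx/tx
    ethtool -a eth0                           # verify the resulting settings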