Dell 1600SC + 82540EM poor performance..HELP NEEDED

Matthew_Wygant at dell.com Matthew_Wygant at dell.com
Fri Jul 25 07:31:23 EDT 2003


I would stick with Intel; I would not use a Broadcom at all...

-----Original Message-----
From: Stephane.Martin at imag.fr [mailto:Stephane.Martin at imag.fr] 
Sent: Friday, July 25, 2003 4:23 AM
To: Matthew_Wygant at exchange.dell.com
Cc: beowulf at beowulf.org
Subject: Re: Dell 1600SC + 82540EM poor performance..HELP NEEDED


Matthew_Wygant at Dell.com wrote:
> 
> Desktop or server quality, I do not know, but the 1600SC does have
> the 82540 chip; dmesg should show that much.  It is on a 33 MHz bus
> and is rated as a 10/100/1000 NIC.  I was curious which driver you
> were using, e1000 or eepro1000?  The latter has known slow-transfer
> problems, but as mentioned, hard-setting all network devices should
> yield the best performance.  Hope that helps.  1600SC servers are
> not the best fit for clusters, given their size and power
> consumption; I would recommend the 650s or 1650s instead.
> 
> -matt
> 
> -----Original Message-----
> From: Stephane.Martin at imag.fr [mailto:Stephane.Martin at imag.fr]
> Sent: Thursday, July 24, 2003 4:52 PM
> To: Jim Phillips
> Cc: boewulf
> Subject: Re: Dell 1600SC + 82540EM poor performance..HELP NEEDED
> 
> Jim Phillips wrote:
> >
> > Hi,
> >
> > The 82540EM is a low-cost 32-bit "desktop" NIC, so it's hard to get 
> > full gigabit bandwidth, particularly if you're running at 33 MHz 
> > (look at /proc/net/PRO_LAN_Adapters/eth0/PCI_Bus_Speed to find out).  
> > There are no 82540EM-based PCI-X cards, AFAIK; are you sure it 
> > wasn't a 64-bit 82545EM card?  Intel distinguishes their 32-bit 
> > 33/66 MHz PCI PRO/1000 MT Desktop cards that use 82540EM from their 
> > 64-bit PCI-X PRO/1000 MT Server cards that use the 82545EM (and have 
> > full gigabit performance).
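> >
> > Concretely, assuming the Intel e1000 driver with its /proc interface
> > is loaded and the port is eth0 (the exact /proc path can vary with
> > the driver version), the checks look something like:
> >
> >   cat /proc/net/PRO_LAN_Adapters/eth0/PCI_Bus_Speed
> >   lspci | grep -i ethernet   # decoded name shows 82540EM vs 82545EM
> >   dmesg | grep -i e1000      # e1000 probe and link messages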
> >
> > -Jim
> >
> > On Thu, 24 Jul 2003 Stephane.Martin at imag.fr wrote:
> >
> > > > > Hello,
> > > > >
> > > > > We have recently received 48 dual-Xeon Dell 1600SCs and we are
> > > > > running some benchmarks to test the cluster. Unfortunately we
> > > > > get very bad performance from the onboard gigabit card
> > > > > (82540EM chipset). We have run Linux netperf tests and we get
> > > > > only 33 MB/s between 2 machines. We have updated the drivers
> > > > > to the latest versions, installed procfgd, and so on...
> > > > > Finally we installed Win2000 with the latest driver from
> > > > > Intel: the results are identical... To go further, we
> > > > > installed a PCI-X 82540EM card and re-ran the tests: that way
> > > > > the results are much better: 66 MB/s full duplex...
> > > > >
> > > > > So the question is: is there a well-known problem with the
> > > > > Dell 1600SC concerning the 82540EM integration on the
> > > > > motherboard?
> > > > >
> > > > > Has anyone already had (or heard about) this problem? Is
> > > > > there any solution?
> > > > >
> > > > > Thanks for your help
> > > > >
> > > >
> > > > --
> > > > Dr. Jeff Layton
> > > > Chart Monkey - Aerodynamics and CFD
> > > > Lockheed-Martin Aeronautical Company - Marietta
> > >
> > > Hello,
> > >
> > > For our tests we are connected to a 4108GL (J4865A), and we have
> > > done all the necessary checks (maybe we have forgotten something
> > > very big?) to ensure the validity of our measurements. The ports
> > > have been tested with auto-negotiation on, then off, and also
> > > forced. We get the same measurements when connected to a J4898A.
> > > The negotiation between the NICs and the two switches is working.
> > >
> > > When using a Tyan motherboard with the built-in 82540EM, the same
> > > benchmarks and switches, and the same procedures (driver updates
> > > and compilation from Intel, various benchmarks, different OSes),
> > > the results are correct (80 to 90 MB/s).
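> > >
> > > (The throughput figures above come from netperf; roughly, netserver
> > > is started on one node and netperf is pointed at it from the other,
> > > something like the lines below, with the node names and 60-second
> > > duration just as examples.  The same kind of run is done on both
> > > the Dell and the Tyan machines.)
> > >
> > >   node2$ netserver
> > >   node1$ netperf -H node2 -t TCP_STREAM -l 60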
> > >
> > > All our tests tend to show that Dell missed something in the
> > > integration of the 82540EM in the 1600SC series... if not, we
> > > would really appreciate knowing what we are missing, because here
> > > we have a $150,000 cluster that is supposed to have a gigabit
> > > network but is performing like three bonded 100 Mbit cards (in
> > > full duplex it's even worse!). If the problem is not quickly
> > > solved, the 48 machines will be returned...
> > >
> > > Thanks a lot for your concern,
> > >
> > > regards
> > >
> > >
> > > --
> > > Stephane Martin         Stephane.Martin at imag.fr
> > > http://icluster.imag.fr
> > > Tel: 04 76 61 20 31
> > > Informatique et distribution Web:  http://www-id.imag.fr ENSIMAG - 
> > > Antenne de Montbonnot ZIRST - 51, avenue Jean Kuntzmann 38330 
> > > MONTBONNOT SAINT MARTIN 
> 
> I'm going to re-check it yet again...
> 
> Thanks a lot for your concern!
> 
> --
> Stephane Martin         Stephane.Martin at imag.fr
> http://icluster.imag.fr
> Tel: 04 76 61 20 31
> Informatique et distribution Web:  http://www-id.imag.fr ENSIMAG - 
> Antenne de Montbonnot ZIRST - 51, avenue Jean Kuntzmann
> 38330 MONTBONNOT SAINT MARTIN

Hello,

The driver used is e1000, built from the latest source from Intel...
We are working out a commercial deal to get good non-onboard gigabit
NICs at low cost... Which one is the best? (Broadcom? Intel? Other?)
I've checked (by myself this time ;) the ID of the added PCI card:
YOU ARE RIGHT, it's an 82545EM. Our fault! Good news! BUT, I've also
re-checked the part number on the Tyan motherboard, and this time it
really is an 82540EM. Bad news! So the problem is still there: why do
we get twice the performance on a Tyan motherboard compared with a
Dell motherboard? (same OS install, same benchmark, same network)
BTW, we are going to get a card on the 64-bit PCI-X bus, as the
onboard NIC is not suitable for high-performance use.
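
Once the PCI-X card is in, we will sanity-check that it really sits on
the 64-bit bus and then redo the comparison, roughly along these lines
(assuming the new port comes up as eth1; the lspci slot address is just
a placeholder):

  lspci | grep -i ethernet          # decoded chip names (82540EM / 82545EM)
  lspci -vv -s <bus:dev.fn> | grep -i 66mhz   # 66MHz-capable status bit
  cat /proc/net/PRO_LAN_Adapters/eth1/PCI_Bus_Speed
  netperf -H <other-node> -t TCP_STREAM -l 60   # redo the throughput test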

Thanks to all for your concern.

regards

-- 
Stephane Martin         Stephane.Martin at imag.fr
http://icluster.imag.fr 
Tel: 04 76 61 20 31   
Informatique et distribution Web:  http://www-id.imag.fr ENSIMAG - Antenne
de Montbonnot 
ZIRST - 51, avenue Jean Kuntzmann
38330 MONTBONNOT SAINT MARTIN

_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf


