[Beowulf] EM64T Clusters

Don Holmgren djholm at fnal.gov
Tue Jul 27 18:36:12 EDT 2004


On Fri, 16 Jul 2004, Mark Hahn wrote:

> > The way Intel rolls out new chips is that ISVs, resellers, and the
> > press get a few systems before they go on sale. Then volume production
> > finally rolls around. I don't think we're quite there yet.
>
> can anyone offer hints of pci-express performance?  for instance,
> boards like:
>
> http://www.tyan.com/products/html/tomcati915.html
>
> have an x16 PCI-e slot, which, AFAICT, would be a perfect place to put
> a low-latency cluster interconnect board.  I haven't heard Quadrics
> or Myri talking about their roadmap for PCI-e, but IB people seem to
> think it'll give them the throne.  hard to see why, since it'll
> help their competitors as well.  come to think of it, if IB people
> claim PCI-e will shave over 1 us off their latency, what will PCI-e
> do for Quadrics (already under 2 us!)?
>
> thanks, mark hahn.


We've just brought up a test stand with both PCI-X and PCI-E Infiniband
host channel adapters.  Some very preliminary (and sketchy, sorry) test
results, which will be updated occasionally, are available at:

   http://lqcd.fnal.gov/benchmarks/newib/


The PCI Express nodes are based on Abit AA8 motherboards, which have x16
slots.  We used the OpenIB drivers, as supplied by Mellanox in their
"HPC Gold" package, with Mellanox Infinihost III Ex HCAs.

The PCI-X nodes are a bit dated, but still capable.  They are based on
SuperMicro P4DPE motherboards, which use the E7500 chipset.  We used
Topspin HCAs on these systems, with either the supplied drivers or the
OpenIB drivers.

I've posted NetPIPE graphs (MPI, RDMA, and IPoIB) and Pallas MPI
benchmark results.  MPI latencies were about 4.5 microseconds on the
PCI Express systems and 7.3 microseconds on the PCI-X systems.  With
Pallas, sendrecv() bandwidths peaked at approximately 1120 MB/sec on
the PCI Express nodes and about 620 MB/sec on the PCI-X nodes.

I haven't posted benchmarks for our application yet, but will do so
once we add another pair of PCI-E nodes.

Don Holmgren
Fermilab
_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
