[Beowulf] G5 cluster for testing
bill at cse.ucdavis.edu
Thu Feb 26 06:55:22 EST 2004
On Wed, Feb 25, 2004 at 01:03:02PM +0000, Ashley Pittman wrote:
> There is a third issue here which you've missed which is that
> interconnect performance can depends on the PCI bridge that it's plugged
> into. It would be more correct to say that performance is predictable
> by dual-node performance and scaling of the interconnect. Of course
> this may not make a difference for Ethernet or even gig-e but it does
> matter at the high end.
Take this chart for instance:
On any decent-size cluster, node performance or interconnect
performance is likely to have a significantly larger effect on cluster
performance than any of the differences on that chart.
Or maybe you're talking about sticking $1200 Myrinet cards in a
133 MB/sec PCI slot?
Don't forget that peak bandwidth measurements assume huge packets
(10,000-64,000 bytes), latency tolerance, and zero computation. That's
not exactly the usage I'd expect in a typical production cluster.
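To see why peak bandwidth only shows up at huge packet sizes, here is a
minimal sketch using the standard latency-plus-bandwidth transfer model.
The latency and peak bandwidth numbers are illustrative assumptions, not
measurements of any real interconnect:

```python
# Sketch: effective bandwidth vs. message size under a simple
# latency + bandwidth model. All numbers are assumed, not measured.

LATENCY_S = 10e-6   # assumed one-way latency: 10 microseconds
PEAK_BW = 250e6     # assumed peak bandwidth: 250 MB/s

def effective_bandwidth(msg_bytes):
    """Bytes/second actually achieved for a single message."""
    transfer_time = LATENCY_S + msg_bytes / PEAK_BW
    return msg_bytes / transfer_time

for size in (64, 1024, 10_000, 64_000):
    frac = effective_bandwidth(size) / PEAK_BW
    print(f"{size:>6} bytes: {frac:5.1%} of peak")
```

With these assumed numbers, a 64-byte message reaches only a few percent
of peak, while a 64,000-byte message gets above 90% -- which is why
benchmark packet sizes matter so much.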
So my suggestion is:
#1 Pick your application(s); that's why you're buying a cluster, right?
#2 Among compatible nodes, pick the node with the best performance.
#3 Among compatible interconnects, pick the one with the best
scaling or price/scaling for the number of nodes you can afford/fit.
#4 If you get a choice of PCI-X bridges, sure, consult the URL above
and pick the fastest one.
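The price/scaling tradeoff in the suggestions above can be sketched with
a toy budget calculation. Everything here is a hypothetical assumption
(node price, NIC prices, the crude "lose a fixed fraction per doubling"
scaling model), just to show the shape of the comparison:

```python
# Sketch: comparing interconnects by scaling under a fixed budget,
# rather than by raw link speed. All prices and efficiencies are
# assumed for illustration only.
import math

BUDGET = 100_000.0
NODE_COST = 2_000.0  # assumed price per node, before the NIC

def efficiency(n, eff_per_doubling):
    """Crude scaling model: retain eff_per_doubling each time n doubles."""
    return eff_per_doubling ** math.log2(n)

# (name, assumed per-node interconnect cost, assumed scaling factor)
options = [
    ("gig-e",   100.0, 0.90),   # cheap, scales worse in this model
    ("myrinet", 1200.0, 0.98),  # expensive, scales better in this model
]

for name, nic_cost, eff in options:
    nodes = int(BUDGET // (NODE_COST + nic_cost))
    speedup = nodes * efficiency(nodes, eff)
    print(f"{name:8s}: {nodes:3d} nodes, effective speedup ~{speedup:.1f}x")
```

Under these made-up numbers the cheaper interconnect buys more nodes but
the better-scaling one still wins overall; with different applications
(or node counts) the answer flips, which is exactly why step #1 comes
first.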
Computational Science and Engineering
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf