top500 list (was: opteron VS Itanium 2)

Mark Hahn hahn at physics.mcmaster.ca
Mon Nov 17 12:37:37 EST 2003


> > > Lightning (LANL Opteron)    (2816 CPUs) -  8.1 TFlops
> > 
> > yield is 72%.
> 
> BTW, 60% of 8 GF == 4.8 GF per processor.  72% of 4 GF == 2.88.  If you
> use LINPACK as a metric, why do you think the latter wins?

because rmax/rpeak serves as a sort of "balance-like" measure.
it's also scale-invariant, to first order at least.

within the same category of hardware (say, desktop microprocessors and a 
premium but off-the-shelf interconnect), rmax/rpeak is interesting, 
since $/CPU is very roughly comparable.
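
to make the arithmetic above concrete, here's a minimal python sketch using
the Lightning numbers quoted at the top (the 2816 CPUs, 8.1 TFlops, 4 GF/CPU
peak and 72% yield all come from the quoted lines, not from me):

# rmax/rpeak arithmetic for Lightning (numbers from the thread above)
ncpus      = 2816            # Opteron CPUs in Lightning
rmax_tf    = 8.1             # measured LINPACK rmax, in TFlops
gf_per_cpu = 4.0             # peak GFlops per Opteron, as quoted above

rpeak_tf = ncpus * gf_per_cpu / 1000.0    # ~11.3 TFlops aggregate peak
print(rmax_tf / rpeak_tf)                 # ~0.72 -> the "yield" above
print(rmax_tf * 1000.0 / ncpus)           # ~2.88 -> achieved GFlops per CPU
# doubling ncpus scales rmax and rpeak together, so the ratio is
# (to first order) unchanged -- that's the scale-invariance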

> > for the billionth time: rmax is just a matter of how much money
> > you have.  rmax/rpeak is the only part of top500 that matters.
> 
> You have to include cost.

I assume that if top500 reported prices, they'd be fairly wonky.
for instance, in the DB domain, are $/TPC numbers all that useful?

> Or, put another
> way, a vendor would be better off building a slower processor with a
> modern memory system that achieved 95% of peak.

yes, you've just described a trad vector box.  rmax/rpeak is indeed a
kind of "vector super-ness" measure.

> You can always put more
> of them together with more money, right?

right, which is why I want to somehow regress scale out of the measure.
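
one way to make "regress scale out" concrete would be something like the
sketch below; fitting efficiency against log(#CPUs) is just one arbitrary
choice of mine, and the arrays would have to be pulled from the list itself:

# sketch: remove the overall scale trend from rmax/rpeak by fitting a line
# of efficiency vs log(#CPUs) and keeping the residual as the score
import math

def scale_adjusted_efficiency(ncpus, rmax, rpeak):
    eff = [rm / rp for rm, rp in zip(rmax, rpeak)]   # per-system rmax/rpeak
    x   = [math.log(n) for n in ncpus]               # log scale of each system
    n   = len(x)
    xbar, ybar = sum(x) / n, sum(eff) / n
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, eff)) \
            / sum((xi - xbar) ** 2 for xi in x)      # least-squares slope
    intercept = ybar - slope * xbar
    # residual = efficiency with the overall scale trend removed
    return [yi - (slope * xi + intercept) for xi, yi in zip(x, eff)]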

> (I'm not sure if I know of any
> networks that scale to 100,000 processors).  

grid ;)

> rmax/rpeak is just as bad (or worse) of a metric as rmax if it is the
> only metric.  It's not like LINPACK is terribly communication bound or
> anything (in which case, rmax/rpeak might mean something).  

I wish I had a 1K CPU cluster with gigabit, Myri, Quadrics, *and* IB ;)
squinting at top500, it looks like there is a fairly significant 
dependence of rmax/rpeak upon the type of interconnect.  that's quite interesting.
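
if someone wanted to do that squinting less informally, something like this
would do (a sketch only -- the CSV dump and its column names "interconnect",
"rmax", "rpeak" are hypothetical, since the list is really published as a
web table):

# sketch: mean rmax/rpeak per interconnect family from a top500 dump
import csv
from collections import defaultdict

eff_by_net = defaultdict(list)
with open("top500_nov2003.csv") as f:        # hypothetical file name
    for row in csv.DictReader(f):
        eff_by_net[row["interconnect"]].append(
            float(row["rmax"]) / float(row["rpeak"]))

for net, ratios in sorted(eff_by_net.items()):
    print(net, len(ratios), sum(ratios) / len(ratios))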




