[Beowulf] 96 Processors Under Your Desktop (fwd from brian-slashdotnews at hyperreal.org)

Mark Hahn hahn at physics.mcmaster.ca
Mon Aug 30 18:14:01 EDT 2004


> Transmeta 2) This is not a shared memory setup, but ethernet connected. So

yeah, just gigabit.  that surprised me a bit, since I'd expect a trendy
product like this to want to be buzzword-compliant with IB.

> Does anyone have any idea how the Efficeons stack up against Opterons?

the numbers they give are 3Gflops (peak/theoretical) per CPU.
that's versus 4.8 for an opteron x50, or 10 gflops for a ppc970/2.5.
they mention 150 Gflops via linpack, which is about right, given
a 50% linpack "yield" as expected from a gigabit network.
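the arithmetic is easy to check; a quick sketch (the 96-CPU count is from the subject line, the 50% yield is the gigabit rule of thumb above):

```python
# back-of-envelope linpack estimate for the 96-CPU box
cpus = 96
peak_per_cpu = 3.0        # Gflops per Efficeon (vendor figure, peak/theoretical)
gigabit_yield = 0.5       # typical linpack "yield" on a gigabit network

peak = cpus * peak_per_cpu        # 288 Gflops theoretical
linpack = peak * gigabit_yield    # ~144 Gflops, close to the quoted 150
print(peak, linpack)
```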

remember that memory capacity and bandwidth are also low compared to a typical
HPC cluster.  perhaps cache-friendly things like sequence-oriented bio stuff
would find this attractive, or montecarlo stuff that uses small models.

> A quad cpu opteron comes in at a similar price as Orion's 12 cpu unit,
> but the opteron is a faster chip and has shared mem. The Orion DT-12
> lists a 16 Gflop linpack. Does anyone have quad Opteron linpack results?

for a fast-net cluster, linpack=.65*peak.  for vector machines, it's closer 
to 1.0; for gigabit .5 is not bad.  for a quad, I'd expect a yield better 
than a cluster, but not nearly as good as a vector-super.  guess 
.8*2.4*2*4 ≈ 15 Gflops.

(the transmeta chip apparently does 2 flops/cycle like p4/k8, unlike 
the 4/cycle for ia64 and ppc.)
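spelling that guess out with the factors named (yield, clock, and flops/cycle as in the text):

```python
# quad opteron linpack guess: yield * clock(GHz) * flops/cycle * cpus
smp_yield = 0.8          # better than a gigabit cluster, worse than a vector-super
clock_ghz = 2.4          # opteron x50
flops_per_cycle = 2      # like p4/k8; ia64 and ppc do 4
cpus = 4

estimate = smp_yield * clock_ghz * flops_per_cycle * cpus
print(estimate)          # 15.36 -> ~15 Gflops
```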

I think the main appeal of this machine is tidiness/integration/support.
I don't see any justification for putting one beside your desk - 
are there *any* desktop<=>cluster apps that need more than a single 
gigabit link?

for comparison, 18 Xserves would deliver the same gflops, dissipate
2-3x as much power, and take up about twice the space.
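the count of 18 works out if you assume dual-cpu 2.0 GHz G5 nodes (my assumption; the post doesn't say which Xserve model):

```python
# how many Xserves match the 96-CPU box's 288 Gflops peak?
# assumes dual 2.0 GHz G5 nodes at 4 flops/cycle (not stated in the post)
dt_peak = 96 * 3.0           # Gflops, the Efficeon box
xserve_peak = 2 * 2.0 * 4    # 16 Gflops per dual-G5 node
nodes = dt_peak / xserve_peak
print(nodes)                 # 18.0
```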

personally, I think more chicks would dig a stack of Xserves ;)

_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf


