[Beowulf] evaluating FLOPS capacity of our cluster
tom.elken at qlogic.com
Mon May 11 14:00:40 EDT 2009
> On Behalf Of Rahul Nabar
> On Mon, May 11, 2009 at 12:23 PM, Gus Correa <gus at ldeo.columbia.edu>
> > Theoretical maximum Gflops (Rpeak in Top500 parlance), for instance,
> > on cluster with AMD quad-core 2.3GHz processor is:
> > 2.3 GHz x
> > 4 floating point operations/cycle x
> > 4 cores/CPU socket x
> > number of CPU sockets per node x
> > number of nodes.
> Excellent. Thanks Gus. That sort of estimate is exactly what I needed.
> I do have AMD Athlons.
AMD quad-core Opteron processors (code-named Barcelona and Shanghai for servers) were the first to have 4 FLOPs/cycle. Earlier Opterons and Athlon64's had 2 FLOPS/cycle.
I don't know their current desktop processors as well, but I would guess that 3- or 4-core Phenoms also have a peak of 4 flops/cycle. I am not sure about current Athlons. But you may have to do more searching or give the list your precise Athlon (or Athlon64) model name and number to find out the actual # of FLOPS/cycle for your cluster. IIRC, Athlon before Athlon64 had 1 FLOP/cycle -- though there have been a lot of branding and re-branding for AMD/Intel desktop CPUs over the years.
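Gus's formula above is just a product of the five factors. As a quick sanity check, here is a minimal Python sketch of that arithmetic; the example values (2.3 GHz quad-core Opterons at 4 FLOPs/cycle, 2 sockets/node, 10 nodes) are hypothetical placeholders, so substitute your own cluster's numbers:

```python
def rpeak_gflops(ghz, flops_per_cycle, cores_per_socket, sockets_per_node, nodes):
    """Theoretical peak (Rpeak) in GFLOPS, per Gus's formula above."""
    return ghz * flops_per_cycle * cores_per_socket * sockets_per_node * nodes

# Hypothetical cluster: 10 nodes, each with two quad-core 2.3 GHz
# Barcelona/Shanghai Opterons (4 FLOPs/cycle).
print(rpeak_gflops(2.3, 4, 4, 2, 10))  # 736.0 GFLOPS
```

Remember that Rpeak is a theoretical ceiling; measured Linpack (Rmax) will come in lower, and the right FLOPs/cycle factor depends on the exact CPU model as discussed above.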
> In fact, this is super useful for some of our oldest legacy hardware
> too. We used to just use Dell Desktops clustered together. I have
> easily accessible all the other info. that goes into your equation.
> Except the floating point operations / cycle numbers.
> Let me dig those out.
Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf