[Beowulf] balance between compute and communicate

Lux, Jim (337C) james.p.lux at jpl.nasa.gov
Tue Jan 24 13:13:17 EST 2012

One of the lines in the article Eugen posted:

"There's also a limit to how wimpy your cores can be. Google's infrastructure guru, Urs Hölzle, published an influential paper on the subject in 2010. He argued that in most cases brawny cores beat wimpy cores. To be effective, he argued, wimpy cores need to be no less than half the power of higher-end x86 cores."

Is interesting.. I think the real issue is one of "system engineering": you want processor speed, memory size/bandwidth, and internode communication speed/bandwidth to be "balanced".  Super-duper 10 GHz cores with 1 KB of RAM interconnected by 9600 bps serial links are clearly an unbalanced system..

The paper is at


From the paper:
Typically, CPU power decreases by approximately O(k^2) when CPU frequency decreases by k,

Hmm.. this isn't necessarily true with modern designs.  In the bad old days, when core voltages were high and switching losses dominated, yes, this was the case, but in modern designs the leakage losses are becoming comparable to the switching losses.  But that's OK, because he never comes back to the power issue again, and heads off on Amdahl's law (which we 'wulfers all know) and the single-thread bottleneck that inevitably exists at some point.
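The switching-vs-leakage point can be illustrated with a toy power model.  Everything here is an invented placeholder, not measured silicon data: dynamic power scales as C*V^2*f, and under the classic DVFS assumption that supply voltage scales linearly with frequency, that term goes as f^3; the leakage term, however, is roughly constant while the chip is powered, so total power stops tracking the nice scaling curve at low frequency.

```python
# Toy CPU power model (illustrative constants, not real silicon data).
# Dynamic (switching) power: C * V^2 * f.  If V scales linearly with f,
# dynamic power goes as f^3 -- halving frequency cuts it 8x.  Leakage
# power stays roughly fixed, so total savings are much smaller.

def cpu_power(freq_ghz, c_eff=10.0, v_per_ghz=0.4, leakage_w=10.0):
    """Total CPU power (W) at a given frequency, toy model."""
    v = v_per_ghz * freq_ghz            # voltage tracks frequency (DVFS)
    dynamic = c_eff * v**2 * freq_ghz   # switching loss: C * V^2 * f
    return dynamic + leakage_w          # leakage does not scale down

for f in (3.0, 1.5):
    print(f"{f:>4} GHz: {cpu_power(f):6.1f} W")
```

With leakage zeroed out, halving frequency cuts power exactly 8x; with the leakage floor included, the total drops far less, which is the sense in which the O(k^2) rule of thumb is eroding.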

However, I certainly agree with him when he says:
Cost numbers used by wimpy-core evangelists always exclude software development costs. Unfortunately, wimpy-core systems can require applications to be explicitly parallelized or otherwise optimized for acceptable performance....
But I don't go for
Software development costs often dominate a company's overall technical expenses

I don't know that software development costs dominate.  If you're building a million-computer data center (distributed geographically, perhaps), that's on the order of several billion dollars, and you can buy an awful lot of skilled developer time for a billion dollars.  It might cost another billion to manage all of them, but that's still an awful lot of development.  But maybe in his space, the development time is more costly than the hardware purchase and operating costs.
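The back-of-envelope comparison can be made explicit.  The unit costs below are assumptions for illustration only (a server price and a fully loaded per-developer cost I picked out of the air), not real price data:

```python
# Back-of-envelope: hardware cost of a million-machine data center vs.
# how much developer time a billion dollars buys.  Both unit costs are
# assumed round numbers, not actual figures.
machines = 1_000_000
cost_per_machine = 3_000                  # assumed server cost, USD
hardware = machines * cost_per_machine    # several billion, as claimed

dev_cost_per_year = 500_000               # assumed fully loaded cost, USD
dev_years_per_billion = 1_000_000_000 // dev_cost_per_year

print(f"hardware: ${hardware / 1e9:.0f}B")
print(f"developer-years per $1B: {dev_years_per_billion}")
```

Even with generous per-developer costs, a single billion buys on the order of a couple thousand developer-years, which is the "awful lot of development" being argued above.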

He summarizes with
Once a chip's single-core performance lags by more than a factor of two or so behind the higher end of current-generation commodity
processors, making.....

Which is essentially my system-engineering balancing argument, in the context of the expectation that the surrounding stuff is current generation.

So the real Computer Engineering question is: Is there some basic rule of thumb that one can use to determine appropriate balance, given things like speeds/bandwidth/power consumption?

Could we, for instance, take moderately well understood implications and forecasts of future performance (e.g. Moore's law and its ilk) and predict what size machines with what performance would be reasonable in, say, 20 years?  The scaling rules for CPUs, for memory, and for communications are fairly well understood.

(Or maybe this is something that's covered in every lower-division computer engineering class these days?  I confess I'm woefully ignorant of what's taught at the various levels.)


Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf