opteron VS Itanium 2
hahn at physics.mcmaster.ca
Thu Oct 30 12:45:20 EST 2003
> this fact leads us back to the idea that cache >>is<< important for a suite
> of "representative codes".
yes, certainly, and TBBIYOC (*). but the traditional, perhaps slightly
stodgy, attitude has been that caches do not improve machine
balance. that is, it2 has a theoretical peak of 4 flops/cycle, but since
that could, in the worst case, require 3 doubles per flop, the highest-ranked
CPU is actually imbalanced by a factor of 22.5!
(*) the best benchmark is your own code
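the 22.5 can be reproduced with back-of-the-envelope arithmetic. the sketch below assumes a 1.5 GHz it2 with a 6.4 GB/s front-side bus; those two figures are my guesses about where the number comes from, not something stated above.

```python
# machine-balance arithmetic: bytes/cycle demanded by peak flops
# vs bytes/cycle the memory system can deliver.
clock_hz         = 1.5e9   # assumed Itanium 2 clock
flops_per_cycle  = 4       # it2 peak: two FMAs per cycle
doubles_per_flop = 3       # worst case, e.g. a[i] = b[i] + c[i] style code
bytes_per_double = 8
bus_bytes_per_s  = 6.4e9   # assumed front-side-bus bandwidth

needed    = flops_per_cycle * doubles_per_flop * bytes_per_double  # bytes/cycle demanded
available = bus_bytes_per_s / clock_hz                             # bytes/cycle delivered
imbalance = needed / available

print(f"needs {needed} B/cycle, gets {available:.2f} B/cycle "
      f"-> imbalanced by {imbalance:.1f}x")
```

with these assumed numbers the demand is 96 B/cycle against roughly 4.27 B/cycle delivered, which is where a 22.5x imbalance would come from; a cache changes the effective `available`, which is exactly the point under dispute.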
let's step back a bit. suppose we were designing a new version of SPEC,
and wanted to avoid every problem that the current benchmarks have.
here are some partially unworkable ideas:
keep the geometric mean, but also quote a few other metrics that don't
hide as much interesting detail. for instance, show the variance of
scores, or perhaps show base/peak/trimmed (where the lowest and highest
components are simply dropped).
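the extra statistics are cheap to compute once you have the per-component ratios. a minimal sketch, over an invented score vector (not real SPEC results):

```python
# geometric mean, variance, and a "trimmed" geometric mean that drops
# the single lowest and highest components, per the idea above.
import statistics

scores = [14.2, 9.8, 31.5, 11.1, 12.7, 10.4, 8.9, 55.0]  # hypothetical ratios

geo = statistics.geometric_mean(scores)
var = statistics.variance(scores)
trimmed_scores = sorted(scores)[1:-1]        # drop min and max components
geo_trimmed = statistics.geometric_mean(trimmed_scores)

print(f"geomean {geo:.1f}, variance {var:.1f}, trimmed geomean {geo_trimmed:.1f}")
```

the gap between the plain and trimmed geometric means, together with the variance, is precisely the "interesting detail" a single geomean hides: here one outlier component inflates the headline score.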
cache is a problem unless your code is actually a spec component,
or unless all machines have the same basic cache-to-working-set relation
for each component. alternative: run each component over a sweep of problem
sizes, and derive two scores: in-cache and out-of-cache. use both scores
as part of the overall summary statistic.
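a toy version of the two-score idea, with a placeholder kernel standing in for a SPEC component and an assumed 1 MB last-level cache as the split point (pure python won't show real cache effects, so treat this strictly as a sketch of the bookkeeping):

```python
# sweep one kernel over working-set sizes, then summarize the sizes
# that fit in an assumed cache separately from those that don't.
import statistics
import time

def kernel(n):
    """hypothetical stand-in for a SPEC component at working-set size n doubles."""
    data = [float(i) for i in range(n)]
    t0 = time.perf_counter()
    s = 0.0
    for x in data:                 # one streaming pass over the working set
        s += x * 1.000001
    dt = time.perf_counter() - t0
    return n / dt                  # "score": elements processed per second

CACHE_BYTES = 1 << 20              # assumed last-level cache size
sizes = [1 << k for k in range(10, 21)]   # 1 K .. 1 M doubles

in_cache  = [kernel(n) for n in sizes if n * 8 <= CACHE_BYTES]
out_cache = [kernel(n) for n in sizes if n * 8 >  CACHE_BYTES]

print("in-cache score :", statistics.geometric_mean(in_cache))
print("out-cache score:", statistics.geometric_mean(out_cache))
```

reporting the two geomeans side by side is what keeps a machine from winning purely on a cache-to-working-set accident in one direction or the other.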
I'd love to see good data-mining tools for spec results. for instance,
I'd like an easy way to compare consecutive results for the same
machine as the vendor changed compilers, or as the clock increased.
there's a characteristic "shape" to spec results - which scores are
high and low relative to the other scores for a single machine. this not
only captures outliers (drastic cache or compiler effects), but also
points at strengths/weaknesses of particular architectures. how to do
this? perhaps some kind of factor analysis.
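a much cheaper stand-in for factor analysis gets at the same "shape" idea: z-score each machine's per-component results against that machine's own mean, so the profile shows which components are unusually high or low. the score matrix below is invented for illustration.

```python
# per-machine "shape" profiles: z-scored component results, with
# components beyond |z| > 1.5 flagged as outliers (drastic cache or
# compiler effects). the numbers are made up, not real SPEC data.
import statistics

machines = {
    "machine_a": [12.0, 14.5, 9.0, 13.2, 30.1, 11.8],   # hypothetical ratios
    "machine_b": [22.0, 21.5, 23.1, 20.8, 22.6, 21.9],
}

def shape(scores):
    mu = statistics.mean(scores)
    sd = statistics.stdev(scores)
    return [(s - mu) / sd for s in scores]

for name, scores in machines.items():
    profile = shape(scores)
    outliers = [i for i, z in enumerate(profile) if abs(z) > 1.5]
    print(name, [f"{z:+.2f}" for z in profile], "outliers:", outliers)
```

machine_a's component 4 sticks out of its profile the way a cache-friendly spec component does, while machine_b is flat; real factor analysis (or PCA over the whole result database) would extract such shapes across machines instead of one at a time.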
regards, mark hahn.
Beowulf mailing list, Beowulf at beowulf.org