New cluster benchmark proposal (Re: top500 list)

Bill Broadley bill at
Mon Nov 17 16:36:28 EST 2003

After all this discussion of the top 500 list, it got me thinking about a
"better" benchmark, where "better" means more useful for evaluating my
idea of cluster goodness.

So what is hard about large clusters?  Seems to me it is primarily
scaling.  What controls the scaling?  Mostly the interconnect.  So we
primarily need to evaluate the interconnect and how it performs in a
large cluster environment.

Additionally, getting an account or even the hardware to evaluate single-cpu
performance of an IT2, G5, P4, or Opteron is fairly easy and direct.
Of course there are characteristics inside the box that affect scaling
outside, but I'd argue these effects are much smaller than the effects
of the interconnect.

So what would a better benchmark look like?  Bisection bandwidth
is of course interesting, although it's a fairly gross measure.  How
about something along the lines of:
*   Minimal CPU work, only enough to ensure correctness.
*   MPI based (focus on user-visible performance).
*   Provide scores for sending messages of 1, 10, 100, 1000, and 10000
    64-bit numbers.
*   Have a random mode (any node can talk to any other).
*   Have a nearest-neighbor mode (the end user can define an arbitrary
    mapping of virtual nodes to physical nodes for maximum performance).
*   Run on 8, 16, ... 2^N nodes (for pretty scaling graphs).

For shared memory machines it's much tougher; I don't know of any portable
way to ensure remote page allocation.  Maybe have each cpu allocate a 512
MB array, access it a million times, then swap pointers, start the
clock, and measure the bandwidth per CPU to that memory (wherever it
was allocated).

Does anyone know of similar tools for doing this?  If not, do people
think it would be worthwhile?  If so, I'd be willing to take a shot at
writing the MPI version.  Anyone interested in an SC2003 BOF to discuss it?

Feedback?  Comments?

Bill Broadley
UC Davis