linpack/hpl

Shane Canon canon at nersc.gov
Tue Sep 25 11:22:41 EDT 2001


Greetings,

We are attempting to get a linpack number
for our cluster.  We have around
250 nodes (dual PIIIs) with an Ethernet
interconnect (100BaseT and 1000BaseT).
We have tried running it on a subset of
machines and are getting disappointing
numbers.  We are adjusting the problem
size and block size primarily.  We
have also tried a variety of library
and compiler combinations (ATLAS,
Intel's BLAS; PGI, GCC).   The numbers
for under 10 nodes look reasonable,
but as we edge higher (>30) things
start to tank.  I had always understood
that linpack was fairly insensitive to
the interconnect.  How true is this?
I understand you can increase the blocksize
to limit communications, but this also
causes cache thrashing.  Right?  Are
there any other handles to turn in HPL?
Has anyone ever written a linpack code
that would work well on this type of
architecture?
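
For reference, the other handles sit in HPL.dat
alongside N and NB: the process-grid shape (Ps x Qs),
the panel broadcast algorithm (BCASTs), and the
lookahead depth (DEPTHs).  An abridged HPL.dat sketch
with purely illustrative values, not tuned for any
particular machine:

```
HPLinpack benchmark input file
Innovative Computing Laboratory, University of Tennessee
HPL.out      output file name (if any)
6            device out (6=stdout,7=stderr,file)
1            # of problems sizes (N)
30000        Ns     (illustrative; size N to fill most of memory)
1            # of NBs
64           NBs    (block size)
0            PMAP process mapping (0=Row-,1=Column-major)
1            # of process grids (P x Q)
8            Ps     (keep the grid near square, with P <= Q)
8            Qs
16.0         threshold
1            # of broadcast
1            BCASTs (0=1rg,1=1rM,2=2rg,3=2rM,4=Lng,5=LnM)
1            # of lookahead depth
1            DEPTHs (>=0)
```

On a slow interconnect, the long-message broadcast
variants (BCASTs 4 or 5) and a lookahead depth of 1
are reportedly worth trying.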

Thanks in advance,

--Shane Canon

-- 
------------------------------------------------------------------------
Shane Canon                             voice: 510-486-6981
National Energy Research Scientific     fax:   510-486-7520
  Computing Center                       
1 Cyclotron Road Mailstop 50D-106       
Berkeley, CA 94720                      canon at nersc.gov
------------------------------------------------------------------------



_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf


