very high bandwidth, low latency manner?

Markus Fischer markus at
Mon Apr 15 05:40:49 EDT 2002

Steffen Persvold wrote:
> Now we have price comparisons for the interconnects (SCI,Myrinet and
> Quadrics). What about performance ? Does anyone have NAS/PMB numbers for
> ~144 node Myrinet/Quadrics clusters (I can provide some numbers from a 132
> node Athlon 760MP based SCI cluster, and I guess also a 81 node PIII ServerWorks
> HE-SL based cluster).

Yes, please.

I would like to get/see some numbers.
I have run tests with SCI for a non-linear diffusion algorithm on a 96-node
cluster with a 32-bit/33 MHz PCI interface. I thought that the poor
scalability was due to the older interface, so I switched to
an SCI system with 32 nodes and a 64-bit/66 MHz interface.

Still, the speedup values behaved like a dog with more than 8 nodes.
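For anyone less familiar with the terms: "speedup" here is the usual S(p) = T(1)/T(p), and parallel efficiency is S(p)/p. The sketch below is mine, not the author's code, and the timing values in it are hypothetical placeholders chosen only to illustrate the kind of flattening curve described above.

```python
def speedup(t_serial, t_parallel):
    """Classic speedup S(p) = T(1) / T(p)."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    """Parallel efficiency E(p) = S(p) / p; 1.0 is ideal scaling."""
    return speedup(t_serial, t_parallel) / p

# Hypothetical wall-clock times (seconds) per node count, flattening
# beyond 8 nodes -- NOT measurements from the post above.
timings = {1: 1000.0, 2: 510.0, 4: 265.0, 8: 150.0, 16: 130.0, 32: 125.0}
for p, t in timings.items():
    print(f"p={p:3d}  S={speedup(timings[1], t):5.2f}  "
          f"E={efficiency(timings[1], t, p):4.2f}")
```

With numbers like these, efficiency drops well below 0.5 past 8 nodes, which is the sort of curve being complained about.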

In particular, the startup time reached minutes, which is probably due to
the exporting and mapping of memory.

Yes, the MPI library used was ScaMPI. Thus, I think the (marketing)
numbers you provide below are not relevant except for applying for more VC.

Even worse, we noticed that the SCI ring structure has an impact on the
communication pattern/performance of other applications.
This means we only got the same execution time if other nodes were
idle or did not run communication-intensive applications.
How will you determine the performance of the algorithm you just invented
in such a case?
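One common answer to that question is to repeat each measurement and look at the spread: on a quiet machine, run-to-run variation is small, while interference from other traffic shows up as a large gap between the best and worst run. This is a generic sketch of that idea (my assumption, not a method from the post), with a trivial placeholder workload:

```python
import time

def timed_runs(fn, repeats=5):
    """Run fn() `repeats` times; return (best, worst) wall-clock seconds.
    The best time approximates the uncontended performance; a large
    best/worst gap suggests interference from other jobs or traffic."""
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - t0)
    return min(samples), max(samples)

# Placeholder workload standing in for one iteration of the real solver.
best, worst = timed_runs(lambda: sum(i * i for i in range(100_000)))
print(f"best={best:.4f}s  worst={worst:.4f}s  spread={worst - best:.4f}s")
```

In an MPI setting the same pattern works with MPI_Wtime() around the communication phase on each rank.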

We then used a 512-node cluster with Myrinet 2000. The algorithm scaled
very well up to 512 nodes.


> Regards,
> --
>   Steffen Persvold   | Scalable Linux Systems |   Try out the world's best
>  mailto:sp at |  | performing MPI implementation:
> Tel: (+47) 2262 8950 |   Olaf Helsets vei 6   |      - ScaMPI 1.13.8 -
> Fax: (+47) 2262 8951 |   N0621 Oslo, NORWAY   | >320MBytes/s and <4uS latency
> _______________________________________________
> Beowulf mailing list, Beowulf at
> To change your subscription (digest mode or unsubscribe) visit
