Global Shared Memory and SCI/Dolphin
Mikhail Kuzminsky
kus at free.net
Mon Jul 23 12:44:16 EDT 2012
According to James Cownie
>
> > > Because MPI is how most people want to achieve code- and
> > > performance-portability.
>
> > I partially agree, partially not: MPI is not the best in the
> > sense of portability (for example, optimization requires knowledge
> > of the interconnect topology, which may vary from cluster to cluster,
> > and of course from MPP to MPP computer).
>
> MPI has specific support for this in Rolf Hempel's topology code,
> which is intended to allow you to have the system help you to choose a
> good mapping of your processes onto the processors in the system.
Unfortunately I do not know about that code :-( but for the best optimization I would rebuild the algorithm itself to "fit" the target topology.
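
Presumably the MPI virtual topology interface (MPI_Cart_create and friends) is what is meant; if so, a minimal sketch would look like this, with reorder = 1 allowing the implementation to permute ranks to match the hardware:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int nprocs, newrank;
    int dims[2] = {0, 0}, periods[2] = {0, 0}, coords[2];
    MPI_Comm grid;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    MPI_Dims_create(nprocs, 2, dims);   /* factor nprocs into a 2-D grid */
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods,
                    1 /* reorder: let MPI remap ranks */, &grid);

    MPI_Comm_rank(grid, &newrank);      /* may differ from the old rank */
    MPI_Cart_coords(grid, newrank, 2, coords);
    printf("rank %d -> grid position (%d,%d)\n",
           newrank, coords[0], coords[1]);

    MPI_Finalize();
    return 0;
}

Whether a given MPI actually exploits the reorder freedom is implementation-dependent, of course.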
>
> This seems to me to be _more_ than you have in a portable way on the
> ccNUMA machines, where you have to worry about
>
> 1) where every page of data lives, not just how close each process is
> to another one (and you have more pages than processes/threads to
> worry about !)
>
> 2) the scheduler choosing to move your processes/threads around the
> machine.
Yes, but "by default" I believe those are tasks for the operating system,
or, at most, information that I supply to the OS *after* compilation
and linking of the program.
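
Under Linux, for example, that information supplied to the OS amounts to something like the following sketch (assuming glibc's sched_setaffinity and the kernel's usual first-touch page placement; the choice of CPU 0 is only for illustration):

#define _GNU_SOURCE      /* for sched_setaffinity() */
#include <sched.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);   /* CPU 0 chosen only for illustration */
    if (sched_setaffinity(0, sizeof mask, &mask) != 0)
        return 1;        /* pinning failed */

    size_t n = 1u << 20;
    double *a = malloc(n * sizeof *a);
    if (a == NULL)
        return 1;

    /* First touch: with the usual Linux policy, these pages are
       allocated on the NUMA node of the CPU we are pinned to. */
    memset(a, 0, n * sizeof *a);

    /* ... compute on a[] from the same pinned CPU ... */
    free(a);
    return 0;
}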
>
> > I think that if there is a relatively cheap and effective way to build
> > a ccNUMA system from a cluster, it may be successful.
>
> Which is, of course, what SCI was _intended_ to be, and we saw how
> well that succeeded :-(
>
> -- Jim
> James Cownie <jcownie at etnus.com>
> Etnus, LLC. +44 117 9071438
> http://www.etnus.com
Mikhail Kuzminsky
Zelinsky Institute of Organic Chemistry
Moscow