[Beowulf] interconnect and compiler ?
hbugge at platform.com
Fri Jan 30 18:30:12 EST 2009
On Jan 30, 2009, at 23:57 , Greg Lindahl wrote:
> Well, like any experiment in this space, this isn't quite
> head-to-head. It could easily be the case that Platform MPI has some
> better collectives or other implementation details which provide a
> performance boost outside of the RDMA vs. message passing issue. I
> thought you had a PSM implementation of Platform MPI?
Yes, we do.
> In which case,
> perhaps you could compare it against the QLogic MPI on the same
> hardware, to see what these other effects might be.
I agree. We would love to do a real head-to-head. The use of
collective operations is not the whole answer, though; I don't have
the split between point-to-point and collectives for the 13 apps off
the top of my head. I should get back to the list with a correlation
between PMPI's performance advantage and the use of collectives.
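For readers unfamiliar with the "split" in question: given per-call MPI timing data from a profiler, the collective vs. point-to-point share is just a partition of total MPI time by call class. The sketch below is purely illustrative; the call lists are incomplete and the profile numbers are invented, not measurements from the 13 apps discussed here.

```python
# Hypothetical sketch: given per-call MPI times (names and numbers are
# made up), compute the share of MPI time spent in collectives vs.
# point-to-point operations.

COLLECTIVES = {"MPI_Allreduce", "MPI_Bcast", "MPI_Reduce",
               "MPI_Alltoall", "MPI_Barrier"}
PNT2PNT = {"MPI_Send", "MPI_Recv", "MPI_Isend", "MPI_Irecv",
           "MPI_Wait", "MPI_Waitall"}

def mpi_split(call_times):
    """call_times: dict mapping MPI call name -> total seconds spent.
    Returns (collective fraction, point-to-point fraction) of MPI time."""
    coll = sum(t for name, t in call_times.items() if name in COLLECTIVES)
    p2p = sum(t for name, t in call_times.items() if name in PNT2PNT)
    total = coll + p2p
    if total == 0:
        return 0.0, 0.0
    return coll / total, p2p / total

# Invented example profile (seconds per MPI call):
profile = {"MPI_Allreduce": 12.0, "MPI_Send": 5.0,
           "MPI_Recv": 7.0, "MPI_Bcast": 6.0}
coll_frac, p2p_frac = mpi_split(profile)
print(f"collectives: {coll_frac:.0%}, point-to-point: {p2p_frac:.0%}")
# prints "collectives: 60%, point-to-point: 40%"
```

In practice such per-call times would come from an MPI profiling tool rather than being typed in by hand; the point is only that the split is a simple ratio once the per-call data exists.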
> It would also be interesting to compare Platform MPI against
> MVAPICH on the same hardware -- do you have those numbers?
Well, no one submits with MVAPICH. On the SPEC MPI2007 web site, you
will find PMPI performing better than Intel MPI, HP-MPI, and MPT (SGI MPI).
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf