[Beowulf] Performance characterising a HPC application

Gilad Shainer Shainer at mellanox.com
Mon Mar 26 12:38:43 EDT 2007


> 
> The next slide shows a graph of the LS-Dyna results recently 
> submitted to topcrunch.org, showing that InfiniPath SDR beats 
> Mellanox DDR on the neon_refined_revised problem, both 
> running on 3.0 Ghz Woodcrest dual/dual nodes.


This is yet another example of a "fair" comparison. Unlike QLogic, Mellanox
offers a family of products for PCIe servers, and multiple MPI
implementations support those products, so performance depends on both the
hardware you pick and the software you run. Why don't you look at
http://www.clustermonkey.net//content/view/178/33/?

That page shows Mellanox SDR beating QLogic even on latency-sensitive
applications, and that was before ConnectX.

As for the overhead portion, this paper does not compare hardware-to-hardware
overhead; the figures are greatly influenced by the MPI software
implementation. But who cares what exactly they measured, right?... Anyway,
it is very reasonable to believe that an on-loading architecture has lower
CPU overhead than an off-loading one...


> 
> I look forward to showing the same advantage vs ConnectX, 
> whenever it's actually available.
> 


Thanks for the marketing promotion. ConnectX InfiniBand adapters are
here - http://www.clustermonkey.net//content/view/191/1/. This new
family of adapters provides 1.2us MPI latency and full hardware transport
offload, among other great features.
 


> -- greg
> 
> (p.s. ggv has issues with the fonts in this pdf, so try xpdf 
> or (yech) acroread instead.)
> 
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org To change your 
> subscription (digest mode or unsubscribe) visit 
> http://www.beowulf.org/mailman/listinfo/beowulf
> 




