[Beowulf] NAMD/CUDA scaling: QDR Infiniband sufficient?

Igor Kozin i.n.kozin at googlemail.com
Mon Feb 9 17:04:19 EST 2009


Are the slides of this presentation available?

2009/2/9 Dow Hurst DPHURST <DPHURST at uncg.edu>

> Has anyone tested scaling of NAMD/CUDA over QLogic or ConnectX QDR
> interconnects for a large number of IB cards and GPUs?  I've listened to
> John Stone's presentation on VMD and NAMD CUDA acceleration.  The conclusion
> I took away from the presentation was that one QDR link per GPU would
> probably be necessary to scale efficiently.  The 60-node, 60-GPU, DDR
> IB-enabled cluster used for initial testing was saturating the interconnect.
> Later tests on the new GT200-based cards show even larger performance gains
> for the GPUs.  The numbers I saw were 1 GPU performing the work of 12 CPU
> cores, or 8 GPUs equaling 96 cores.  So with a ratio of 1 GPU per 12 cores,
> interconnect performance will be very important.
> Thanks,
> Dow
> __________________________________
> Dow P. Hurst, Research Scientist
> Department of Chemistry and Biochemistry
> University of North Carolina at Greensboro
>
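For a rough sense of that ratio, here is a small back-of-envelope sketch in
Python; the per-node configuration and the nominal InfiniBand rates in the
comments are illustrative assumptions, not figures from John Stone's tests:

# Back-of-envelope: how much a "1 GPU ~ 12 cores" speedup raises per-node
# bandwidth demand.  All node parameters below are assumptions for
# illustration only.

cores_per_node = 8           # assumed CPU cores per node
gpus_per_node = 1            # assumed GPUs per node
gpu_equiv_cores = 12         # 1 GPU ~ work of 12 CPU cores (figure quoted above)

cpu_only_throughput = cores_per_node
gpu_node_throughput = cores_per_node + gpus_per_node * gpu_equiv_cores

# If communication volume per unit of simulation work stays roughly constant,
# per-node interconnect traffic grows in proportion to per-node throughput.
factor = gpu_node_throughput / cpu_only_throughput
print("Per-node throughput (and bandwidth demand) rises ~%.1fx" % factor)

# Nominal 4x InfiniBand data rates, approximately: DDR ~2 GB/s, QDR ~4 GB/s
# per direction.  A ~2.5x rise in demand at one GPU per node would already
# exceed the 2x DDR-to-QDR step, which is consistent with the suggestion of
# one QDR link per GPU.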

