[Beowulf] QDR InfiniBand interconnect architectures ... approaches ...

Peter Kjellstrom cap at nsc.liu.se
Fri Apr 9 05:16:39 EDT 2010


On Thursday 08 April 2010, Greg Lindahl wrote:
> On Thu, Apr 08, 2010 at 04:13:21PM +0000, richard.walsh at comcast.net wrote:
> > What are the approaches and experiences of people interconnecting
> > clusters of more than 128 compute nodes with QDR InfiniBand technology?
> > Are people directly connecting to chassis-sized switches? Using
> > multi-tiered approaches which combine 36-port leaf switches?
>
> I would expect everyone to use a chassis at that size, because it's cheaper
> than having more cables. That was true on day 1 with IB, the only question
> is "are the switch vendors charging too high of a price for big switches?"

Recently we (a Swedish academic centre) have received offers from both Voltaire 
and QLogic built on 1U 36-port switches rather than chassis, with the reason 
given as lower cost. So from our point of view, yes, "switch vendors [are] 
charging too high of a price for big switches" :-)

One "pro" for many 1U switches compared to a chassis is that they give you more 
topological flexibility. For example, you can build a 4:1 oversubscribed 
fat-tree, and that will obviously be cheaper than a chassis (even if chassis 
were more reasonably priced).
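To make the trade-off concrete, here is a small sketch of the capacity math 
for a two-tier fat-tree built from identical 36-port leaf and spine switches. 
It assumes the simplest wiring (each leaf runs exactly one uplink to each 
spine, and every spine port feeds a distinct leaf); real designs vary, so 
treat the numbers as illustrative, not as any vendor's reference topology.

```python
def two_tier_fat_tree(ports: int, down_per_leaf: int):
    """Capacity of a simple two-tier fat-tree of identical `ports`-port
    switches, with `down_per_leaf` node-facing ports on each leaf.

    Assumption (hypothetical, for illustration): each leaf has one uplink
    to every spine, and each spine port connects to a distinct leaf.
    """
    up_per_leaf = ports - down_per_leaf
    spines = up_per_leaf            # one uplink per spine switch
    leaves = ports                  # each spine's ports, one per leaf
    nodes = leaves * down_per_leaf
    oversub = down_per_leaf / up_per_leaf
    return nodes, leaves, spines, oversub

# Non-blocking split (18 down / 18 up) on 36-port switches:
print(two_tier_fat_tree(36, 18))   # 648 nodes, 36 leaves, 18 spines, 1:1
# Oversubscribed split (28 down / 8 up), i.e. 3.5:1:
print(two_tier_fat_tree(36, 28))   # 1008 nodes, 36 leaves, 8 spines
```

The oversubscribed layout reaches well past 128 nodes with far fewer spine 
switches and cables than the non-blocking one, which is where the cost 
advantage over a chassis comes from.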

/Peter
_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf