[Beowulf] 10GbE topologies for small-ish clusters?

Greg Lindahl lindahl at pbm.com
Wed Oct 12 14:04:27 EDT 2011


We just bought a couple of 64-port 10GbE switches from Blade for the
middle of our networking infrastructure. They won out over all the
others on price and had the appropriate features. We also bought
Blade top-of-rack switches. Now that Blade has been bought up by IBM
you have to negotiate harder to get that low price, but you can still
get it by threatening them with competing quotes.
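As a rough way to compare a single large core switch against trunked
top-of-rack switches for a two-rack layout like Chris describes, here is
a back-of-the-envelope oversubscription sketch. The port counts are
illustrative assumptions for a 32-nodes-per-rack setup, not figures from
this thread:

```python
# Hypothetical sketch: oversubscription for two-rack 10GbE layouts.
# All port counts below are illustrative assumptions.

def oversubscription(node_ports: int, uplink_ports: int) -> float:
    """Ratio of downlink to uplink bandwidth on a switch, assuming
    all ports run at the same speed (e.g. 10GbE)."""
    return node_ports / uplink_ports

# One 64-port core switch: all 64 nodes one hop apart, no oversubscription.
core_only = oversubscription(64, 64)

# Two 48-port ToR switches: 32 nodes each, 16 ports left for trunked
# uplinks, so inter-rack traffic is oversubscribed 2:1.
tor_trunked = oversubscription(32, 16)

print(f"core-only: {core_only}:1, trunked ToR: {tor_trunked}:1")
```

Whether 2:1 between racks matters depends entirely on how much of your
traffic crosses the rack boundary.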

Gnodal looks very interesting for larger, multi-switch clusters; they
were just a bit late to market for us. Arista really believes that
their high prices are justified; we didn't.

And if anyone would like to buy some used Mellanox 48-port 10GbE
switches, we have 2 extras we'd like to sell.

-- greg

On Wed, Oct 12, 2011 at 10:52:13AM -0400, Chris Dagdigian wrote:
> 
> First time I'm seriously pondering bringing 10GbE straight to compute 
> nodes ...
> 
> For 64 servers (32 to a cabinet) in an HPC system that spans two racks, 
> what would the common 10 Gig networking topology be today?
> 
> - One large core switch?
> - 48 port top-of-rack switches with trunking?
> - Something else?
> 
> Regards,
> Chris
_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf

