[Beowulf] 10GbE topologies for small-ish clusters?
Shainer at Mellanox.com
Wed Oct 12 12:30:02 EDT 2011
You can also check the Mellanox products, both 40GigE and 10GigE switch fabrics.
From: beowulf-bounces at beowulf.org [mailto:beowulf-bounces at beowulf.org] On Behalf Of Hearns, John
Sent: Wednesday, October 12, 2011 8:31 AM
To: dag at sonsorol.org; beowulf at beowulf.org
Subject: Re: [Beowulf] 10GbE topologies for small-ish clusters?
First time I'm seriously pondering bringing 10GbE straight to compute nodes ...
For 64 servers (32 to a cabinet) and an HPC system that spans two racks, what would the common 10 Gig networking topology be today?
- One large core switch?
- 48 port top-of-rack switches with trunking?
- Something else?
I was going to suggest two Gnodal top-of-rack switches, linked by a 40Gbps trunk.
I see, though, that their GS7200 switch has 72 x 10Gbps ports - that should do you just fine!
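The trade-off between the two designs above comes down to simple arithmetic: a rack of 32 nodes at 10GbE presents 320Gbps at the edge, so a single 40Gbps inter-rack trunk is heavily oversubscribed, while one large 72-port switch covers all 64 nodes non-blocking. A minimal sketch of that calculation (using the node counts and link speeds from this thread; variable names are illustrative):

```python
# Oversubscription for the two-ToR design: 32 nodes/rack at 10GbE,
# racks joined by a single 40Gbps trunk (numbers from the thread).
nodes_per_rack = 32
node_link_gbps = 10
inter_rack_gbps = 40

rack_edge_bw = nodes_per_rack * node_link_gbps    # 320 Gbps feeding each ToR switch
oversub = rack_edge_bw / inter_rack_gbps          # ratio of edge to trunk bandwidth

print(f"Edge bandwidth per rack: {rack_edge_bw} Gbps")
print(f"Inter-rack oversubscription: {oversub:.0f}:1")
# -> 8:1 for traffic crossing racks; a single 72-port 10GbE switch
#    (e.g. the GS7200 mentioned above) would be non-blocking instead.
```

Whether 8:1 is acceptable depends on the workload: embarrassingly parallel jobs pinned within a rack barely notice it, while all-to-all MPI traffic will.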
Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf