[Beowulf] recommendations for a good ethernet switch for connecting ~300 compute nodes
rpnabar at gmail.com
Thu Sep 3 12:28:39 EDT 2009
On Thu, Sep 3, 2009 at 10:19 AM, Gus Correa<gus at ldeo.columbia.edu> wrote:
> See these small SDR switches:
> And SDR HCA card:
Thanks Gus! This info was very useful. A 24-port switch is $2400 and
the card $125, so each compute node would be roughly $300 more
expensive. (What about InfiniBand cables? Are those special, and how
expensive are they? I did google but was overwhelmed by the variety
available.) That isn't bad at all, I think. Based on my current node
price, it would only take about a 20% performance boost to justify the
investment, and I feel IB could deliver that. When I calculated this
before, the economics were totally off; maybe I had the wrong figures.
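Spelling out that back-of-the-envelope calculation (the cable price and the node price are my assumptions, not figures from this thread):

```python
# Rough per-node cost of adding SDR InfiniBand (sketch, not a quote).
switch_price = 2400.0   # 24-port SDR switch, per Gus's pointer
ports = 24
hca_price = 125.0       # SDR HCA card
cable_price = 75.0      # ASSUMPTION: short copper cable, ballpark guess

per_node = switch_price / ports + hca_price + cable_price
print(per_node)                # 300.0 -> the ~$300/node figure above

# ASSUMPTION: a ~$1500 node is what makes $300 extra equal a 20% break-even.
node_price = 1500.0
print(per_node / node_price)   # 0.2 -> ~20% speedup needed to justify it
```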
The price scaling seems tough, though. Stacking 24-port switches might
get a bit too cumbersome for 300 servers, but when I look at the
corresponding 48- or 96-port switches, the per-port price seems to
shoot up. Is that typical?
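To put numbers on "cumbersome": a rough count of 24-port switches for a two-level fat-tree (a sketch; assumes full bisection bandwidth, i.e. each leaf splits its ports 12 down to nodes, 12 up to the spine):

```python
import math

def fat_tree_switches(nodes, ports=24):
    """Rough switch count for a two-level full-bisection fat-tree:
    each leaf uses half its ports for nodes, half for spine uplinks."""
    down = ports // 2                    # 12 node-facing ports per leaf
    leaves = math.ceil(nodes / down)     # leaf switches needed
    uplinks = leaves * (ports - down)    # total leaf-to-spine cables
    spines = math.ceil(uplinks / ports)  # spine switches to terminate them
    return leaves, spines

leaves, spines = fat_tree_switches(300)
print(leaves, spines, leaves + spines)   # 25 13 38
```

So 300 nodes would take roughly 38 small switches and ~300 inter-switch cables, which is the usual argument for large monolithic switches even at a higher per-port price.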
> For a 300-node cluster you need to consider
> optical fiber for the IB uplinks,
You mean compute-node-to-switch and switch-to-switch connections?
Again, any $$$ figures, ballpark?
> I don't know about your computational chemistry codes,
> but for climate/oceans/atmosphere (and probably for CFD)
> IB makes a real difference w.r.t. Gbit Ethernet.
I have a hunch (just a hunch) that the computational chemistry codes
we use haven't been optimized to take full advantage of the latency
benefits, etc. Some of what they do is pretty bizarre and inefficient
if you look at their source code (writing to large I/O files all the
time, e.g.). I know this ought to be fixed, but that seems like a
problem for another day!
Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf