[Beowulf] Gigabit switches for Channel Bonding
Ramiro Alba Queipo
raq at cttc.upc.edu
Tue Apr 24 08:34:42 EDT 2007
We have a cluster of 150 nodes, of which 48 are dual-core with 2
Gigabit Ethernet cards (24 of them are about to be purchased) and 24
are single-core, also with 2 Gigabit Ethernet cards. The rest are
single-core with a single Fast Ethernet card.
We intend to set up Ethernet channel bonding via the Linux kernel
bonding driver on the 48 dual-core nodes, so we need to buy 48-port
stackable gigabit switches, which should ideally (though this is not
essential) also stack with a 48-port gigabit switch (HP ProCurve 2848)
we already have. The rest of the nodes connect to a Cisco Fast
Ethernet switch.
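For reference, a bonding setup on each node might look like the sketch
below (a minimal example only: the interface names eth0/eth1, the IP
address, and the choice of balance-rr mode are assumptions on my part,
not something already decided):

```shell
# Hypothetical /etc/modprobe.conf entries for the bonding driver.
# mode=balance-rr (mode 0) stripes packets round-robin across both
# links; miimon=100 polls link state every 100 ms for failover.
alias bond0 bonding
options bonding mode=balance-rr miimon=100

# Bring the bond up (e.g. from an init script); the address is
# just a placeholder for each node's real one:
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1
```

Note that balance-rr can reorder packets, which may matter for MPI
latency; 802.3ad (mode 4) is an alternative if the switches support it.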
The options we are evaluating (both are stackable) are:
- 3Com Switch 5500G, which can be stacked by adding modules with 8
1-Gb ports each. They claim 48 Gbps stacking bandwidth (96 Gbps full
duplex)??
- HP ProCurve 2900-48G: 10 Gbps stacking bandwidth (CX4 rear port)?
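A quick back-of-envelope check of the stacking figures, under the
simplifying worst-case assumption that every bonded node on one switch
talks at full rate to a node on the other switch (the 24-node split
and the one-direction comparison are my assumptions):

```shell
# 48 bonded nodes split across two 48-port switches
nodes_per_switch=24
link_gbps=1        # gigabit Ethernet
bonded_links=2     # two NICs per node, channel bonded

# Peak one-direction traffic that could cross the stack link
cross_traffic_gbps=$((nodes_per_switch * link_gbps * bonded_links))
echo "worst-case cross-stack traffic: ${cross_traffic_gbps} Gbps"  # 48 Gbps

# Compare against each vendor's claimed one-direction stacking capacity
stack_3com_gbps=48   # 5500G claimed figure
stack_hp_gbps=10     # 2900-48G CX4 rear port
[ "$cross_traffic_gbps" -le "$stack_3com_gbps" ] && echo "3Com: fits"
[ "$cross_traffic_gbps" -le "$stack_hp_gbps" ] || echo "HP: oversubscribed"
```

On these (claimed) numbers the 2900-48G stack link would be roughly
5:1 oversubscribed in the worst case, so how much inter-switch traffic
the MPI jobs actually generate matters a lot.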
We use the cluster to run MPI-based applications, so communication
performance (especially latency) and stability are very important to us.
Which solution do you advise?
Thanks in advance
Beowulf mailing list, Beowulf at beowulf.org