[Beowulf] many cores and ib

Gilad Shainer Shainer at mellanox.com
Mon May 5 13:01:42 EDT 2008


>> Hello,
>> Just a small question: does anybody have experience with many-core
>> (16) nodes and InfiniBand? We have some users that need shared
>> memory, but we also want to build a normal cluster for MPI apps,
>> so we think this could be a solution. Let's say about 8 machines
>> (96 processors) plus InfiniBand. Does that sound correct?
>> I'm aware of the bottleneck that one IB interface means for
>> the MPI cores; is there any possibility of bonding?
>
> Bonding (or multi-rail) does not make sense with "standard IB" in a
> PCIe x8 slot, since the PCIe connection already limits the transfer
> rate of a single IB link.
>
> My hint would be to go for InfiniPath from QLogic or the new ConnectX
> from Mellanox, since message rate is probably your limiting factor and
> those technologies have a huge advantage over standard InfiniBand
> SDR/DDR.
>
>
> InfiniPath and ConnectX are available as DDR InfiniBand and provide a
> bandwidth of more than 1800 MB/s.
 
 
Bonding can provide more bandwidth if needed. Each PCIe x8 slot can
provide (on average) around 1500 MB/s, so using IB DDR (no need for
ConnectX) you will get about 1500 MB/s uni-directional from each PCIe
Gen1 x8 slot. According to the OSU benchmarks, InfiniHost III Ex
delivers more than 20M MPI messages per second. Of course, moving to
ConnectX also gives you the option of servers with PCIe Gen2 slots,
where each slot provides around 3300 MB/s uni-directional (6500 MB/s
bi-directional) with ConnectX IB QDR. If you use the DDR option with
ConnectX, the bandwidth will be a little higher than what Jan
mentioned, but that is in the ballpark.
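The PCIe figures above can be sanity-checked with a quick back-of-the-envelope calculation. This is only a sketch: the 8b/10b line encoding and per-lane signaling rates (2.5 GT/s for Gen1, 5 GT/s for Gen2) are standard PCIe facts, but the ~0.78 payload-efficiency factor for TLP/header overhead is an assumption chosen to land near the quoted numbers.

```python
def pcie_effective_mb_s(gt_per_s, lanes, payload_efficiency=0.78):
    """Approximate usable uni-directional PCIe bandwidth in MB/s.

    gt_per_s: per-lane signaling rate in GT/s (2.5 for Gen1, 5.0 for Gen2).
    The 8/10 factor is the 8b/10b line encoding used by PCIe Gen1/Gen2;
    payload_efficiency (assumed, not from the post) models packet overhead.
    """
    raw_mb_s = gt_per_s * (8 / 10) * lanes * 1000 / 8  # Gb/s -> MB/s
    return raw_mb_s * payload_efficiency

gen1_x8 = pcie_effective_mb_s(2.5, 8)  # roughly the ~1500 MB/s cited
gen2_x8 = pcie_effective_mb_s(5.0, 8)  # roughly the ~3300 MB/s cited
```

Both estimates come out within a few percent of the numbers in the post, which is why a single DDR HCA saturates a Gen1 x8 slot and bonding only pays off with more or faster slots.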

Gilad.

_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
