[Beowulf] Infiniband and multi-cpu configuration

Daniel Fernandez daniel at cttc.upc.edu
Fri Feb 8 11:30:14 EST 2008


Hi beowulf users,

We'll move our GigE setup to an Infiniband 4X DDR one ( prices have
dropped quite a bit ). We'll also build on AMD Opteron, with up to 4 or
8 cores per node.

In case of 8 cores:

	A 4-socket dual-core solution *must* scale better than a 2-socket
quad-core one, at least in terms of memory bandwidth ( nearly double,
since each socket brings its own memory controller ). On the other hand,
the HyperTransport links on the Opteron 2000/8000 series are
theoretically rated at 8 GB/s per link, which is well above 4X SDR
Infiniband ( 10 Gbit/s signalling, roughly 1 GB/s of usable data per
direction )...
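As a back-of-envelope sanity check, here is a sketch of the raw link rates involved. The figures are the commonly quoted nominal ones ( HT 2.0 aggregate, Infiniband 8b/10b coding ) and may not match the exact hardware, so treat them as assumptions:

```python
# Rough peak-bandwidth comparison (nominal figures, back-of-envelope only):
#  - HyperTransport link:  8 GB/s aggregate (both directions combined)
#  - InfiniBand 4X SDR: 10 Gbit/s signalling, 8b/10b coding -> 8 Gbit/s data
#  - InfiniBand 4X DDR: 20 Gbit/s signalling -> 16 Gbit/s data

def ib_data_rate_gbs(signal_gbit):
    """Usable InfiniBand data rate in GB/s after 8b/10b encoding."""
    return signal_gbit * 0.8 / 8  # strip coding overhead, bits -> bytes

ht_link = 8.0                    # GB/s, aggregate, assumed nominal figure
ib_sdr = ib_data_rate_gbs(10.0)  # per direction
ib_ddr = ib_data_rate_gbs(20.0)  # per direction

print(f"HT link: {ht_link} GB/s, "
      f"IB 4X SDR: {ib_sdr} GB/s, IB 4X DDR: {ib_ddr} GB/s")
```

By these numbers the on-board HT fabric is several times faster than even 4X DDR Infiniband per link, before counting protocol and latency overheads.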

	A configuration like:

		 2 PCs, each with 2 sockets of dual-core Opterons, linked together
with Infiniband 4X DDR ( 8 cores total )

	should perform like:

		 1 PC with 4 sockets of dual-core Opterons,

	while saving cost on Infiniband hardware.

	When maximizing cores per node ( reducing network connections and
network protocol overhead ) and considering the Opteron memory
architecture... is 8 cores ( 4 sockets * 2 cores ) an adequate number
per node, or is 4 ( 2 sockets * 2 cores ) better?
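One way to frame the trade-off above: since each Opteron socket has its own memory controller, peak per-core memory bandwidth depends on cores per socket, not total cores. A quick sketch, assuming a hypothetical dual-channel DDR2-667 figure of 10.7 GB/s per socket ( substitute your actual DIMM configuration ):

```python
# Per-core peak memory bandwidth (assumed dual-channel DDR2-667 figure):
PER_SOCKET_BW = 10.7  # GB/s per socket, hypothetical

def per_core_bw(sockets, cores_per_socket, socket_bw=PER_SOCKET_BW):
    """Aggregate bandwidth scales with socket count (one memory
    controller per socket), then is shared among that socket's cores."""
    return sockets * socket_bw / (sockets * cores_per_socket)

four_by_two = per_core_bw(4, 2)  # 4 sockets * 2 cores = 8 cores
two_by_four = per_core_bw(2, 4)  # 2 sockets * 4 cores = 8 cores

print(f"4x2: {four_by_two:.2f} GB/s/core, 2x4: {two_by_four:.2f} GB/s/core")
```

Under this assumption the 4-socket dual-core box gives each core twice the peak memory bandwidth of the 2-socket quad-core one, which is the "nearly double" intuition stated earlier.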

Also, Infiniband HCAs with onboard memory should perform better than
mem-free ones, that much seems clear... but by how much? Any real
numbers out there?

Thanks in advance.
	
---
Daniel Fernandez <daniel at cttc.upc.edu>
Heat and Mass Transfer Center - CTTC
www.cttc.upc.edu
UPC Campus Industrials , ETSEIAT , TR4 Building

	



_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf



