[Beowulf] Re: newbies' dilemma / firewire? (Hahn) (Will)

Ed Karns edkarns at firewirestuff.com
Fri Mar 10 11:58:31 EST 2006


On Mar 9, 2006, at 7:32 PM, beowulf-request at beowulf.org wrote:

> Infiniband with DDR is already at 20Gbps over CX4 copper

"The 4X InfiniBand protocol extends the existing 1X protocol by  
supporting up to four 2.5Gb/sec dual-simplex connections for an  
effective duplex transmission speed of 10Gb/sec. ..." (From:  
http://www.lecroy.com/tm/products/ProtocolAnalyzers/infiniband.asp?menuid=62 )
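The lane arithmetic behind those figures can be sketched as follows. (A  
minimal sketch, assuming the classic 8b/10b line encoding of SDR/DDR  
InfiniBand links of that era; the encoding detail is not stated in the  
quote above.)

```python
# Sketch of InfiniBand link-speed arithmetic (assumption: 8b/10b-encoded
# SDR/DDR links, as commonly deployed circa 2006).

SDR_LANE_GBPS = 2.5          # signaling rate per lane, single data rate
DDR_LANE_GBPS = 5.0          # signaling rate per lane, double data rate
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b line code: 8 data bits per 10 line bits

def link_rates(lanes, lane_gbps):
    """Return (signaling_gbps, data_gbps) for a link of `lanes` lanes."""
    signaling = lanes * lane_gbps
    return signaling, signaling * ENCODING_EFFICIENCY

# 4X SDR: the "10Gb/sec" link quoted above (8 Gb/s of actual payload)
print(link_rates(4, SDR_LANE_GBPS))   # (10.0, 8.0)
# 4X DDR: the "20Gbps over CX4 copper" mentioned earlier (16 Gb/s payload)
print(link_rates(4, DDR_LANE_GBPS))   # (20.0, 16.0)
```

Note that the headline Gb/sec figures are signaling rates; the usable  
data rate is 20% lower under 8b/10b encoding.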

I did not mean to imply that this performance level was not possible,  
good, or valuable ... just that it is not "cost effective" considering  
the energy budget (line length vs. energy consumed), the upper limits,  
etc. ... InfiniBand (and others) can be pushed even further over silver  
conductors (or carbon, or superconductors) ... heck, you could make the  
same case for any protocol pushed to the upper limits of physical  
reality. The real question is whether a protocol remains reliable and  
workable when the connection is moved to another medium. Optics can open  
up an order-of-magnitude performance gain (photons over glass, air, or  
other media, instead of electrons over metal) ... bus speeds beyond 100  
or even 500 Gbit/sec at a lower energy requirement. The incremental  
performance improvements of late over metal conductors merely prove the  
need: metal-conductor data transmission is falling behind Moore's "law".

This is going to become very important, very soon, as more advanced,  
dramatically higher-performance Beowulf systems are built. My vote would  
be for the most hardware/firmware-efficient protocol, weighing energy  
budget vs. performance vs. the space allowed. (Cray, IBM, et al.  
currently build clusters that devote hundreds of horsepower to heat  
dissipation ... just because they all use metal conductors for the bus.  
Imagine having a twin-engine aircraft running inside your server  
farm ... something most mortals can not afford, let alone survive.)

I still pose the question: which hardware protocol would be optimum  
for tight clusters of processors sharing a common bus (or other local  
hardware network) in a Standard Temperature & Pressure environment?

"Raw data" transmission a la legacy serial, parallel, SCSI, or  
other ... ?
"Packet-switched data" transmission a la Ethernet, USB, FireWire, or  
other ... ?
A yet-to-be-determined data transmission methodology / topology ... ?

Ed Karns
FireWireStuff.com

_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf