[Beowulf] Re: Beowulf digest, Vol 1 #1656 - 5 msgs

Mark Hahn hahn at physics.mcmaster.ca
Thu Feb 5 23:26:37 EST 2004

> When purchasing a cluster or cluster hardware, one can spend as little as 20 
> Euro ( ~30 CAD) per node on interconnects

or less, actually.  you seem to be thinking of gigabit, which is indeed a 
very attractive cluster interconnect.  otoh, there are lots of even more
loosely-coupled, non-IO-intensive apps that run just fine on 100bT.

> to more than 1000 Euro per node for 
> Myrinet or Scali.

or IB.

> The Fusion-MPT chipset adds about 100 Euro to the cost of a motherboard. 

yes, obviously.  I'd probably rather have another gigabit port or two;
bear in mind that some very elegant things can be done when each node has
multiple network connections...
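
(one classic example of such elegance, sketched here rather than taken from the post: 2.4-era Linux channel bonding can stripe traffic across two gigabit ports; the interface names eth1/eth2 and the round-robin mode are assumptions)

```shell
# hypothetical sketch: bond two spare gigabit ports (eth1, eth2 assumed)
# into one logical link with the Linux bonding driver, 2.4-era tooling
modprobe bonding mode=balance-rr miimon=100   # round-robin striping, 100ms link monitor
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
ifenslave bond0 eth1 eth2                     # enslave both physical ports
```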

really, the chipset isn't the point; it's just a $5 coprocessor.  what counts
is coming up with a physical layer, including affordable switches, and
somehow getting millions of people to make/buy them.

> 100 
> Euro per node is much eaier to justify than 1000 Euro per node when the 
> Cluster when the cluster will not be primarly running tighly coupled parallel 
> problems. 

hmm, we've already established that gigabit is much cheaper, and for
loose-coupled systems, chances are good that even 100bT will suffice.

> If the performance of MPI over Fusion-MPT is much better than 
> Ethernet with good latency,

but does it even exist?  so far, all I can find is two lines on a marketing glossy...

> it becomes a cheap way to add flexibility to a 
> cluster.

many things could happen; I'm not optimistic about this Fusion-MPT thing.
it seems to fly in the face of "do one thing, well".

> Here is some info about the chipset... 
> http://www.lsilogic.com/files/docs/marketing_docs/storage_stand_prod/
> integrated_circuits/fusion.pdf

that's the vapid marketing glossy.

> http://www.lsilogic.com/technologies/lsi_logic_innovations/
> fusion___mpt_technology.html

that is even worse.

> There is also information in the linux kernel documentation about 
> running MPI over this kind of interconnect.

I'm not sure what "kind" here means, do you mean over scsi?  the traditional
problem with *-over-scsi (and there have been more than a couple) has been
that scsi interfaces aren't optimized for low-latency.  the bandwidth isn't
that hard, really - 320 MB/s is around Myrinet speed, and significantly
slower than IB.  OK, how about FC?  it's obviously got an advantage over U320
in that FC switches exist (oops, expensive) but it's really just a 1-2 Gb
network protocol with 2k packets.  as for the "high performance ARM-based
architecture" part, well, I must admit that I don't associate ARM with high
performance of the gigabyte-per-second sort.  

personally, I'd love to see sort of the network equivalent of the old
smart-frame-buffer idea.  practically, though, it really boils down to the
gritty details like availability of switches, choosing a physical-layer
standard, etc.  gigabit is the obvious winner there, but IB is trying hard
to get over that bump...

(Myri seems not to be very ambitious, and 10G eth seems to be straying into
a morass of tcp-offload and the like...)

regards, mark hahn.

Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
