Clusters Vs Grids

Gerry Creager N5JXS gerry.creager at
Tue Jul 22 07:40:08 EDT 2003

I'd offer that we're going to see grids grow for at least the foreseeable 
future.  I think we need to coin another term, however, for the 
applications that will run on them in the near term: "pathetically" 
parallel.  We've seen the growth of clusters, especially in the 
NUMA/embarrassingly parallel regime, and these have proven to work well. 
Across the 'Grid' as we know it today, we either see parallelism that 
simply benefits from distribution, owing to the vast amount of data, and 
thus benefits from cycle-stealing, or applications that are totally 
tolerant of disparate latency.

But what does the future hold?  I can foresee an application that uses 
distributed storage to preposition an entire input dataset so that all 
the distributed nodes can access it, along with a version of the 
Logistical Backbone that queues data parcels for acquisition and 
processing and manages the reintegration of the returned results into an 
output queue.  Along another line, I can envision an application 
prepositioning all the data across the distributed nodes, using an 
enhanced version of semaphores to signal when a chunk is processed, then 
reintegrating the pieces later.
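As a rough illustration of that second pattern, here's a minimal sketch in 
Python.  It uses threads as stand-ins for distributed nodes and a plain 
counting semaphore in place of the "enhanced" distributed semaphores 
imagined above; the chunking scheme and the squaring workload are both 
hypothetical, chosen only to make the preposition / signal / reintegrate 
cycle concrete:

```python
import threading

NUM_NODES = 4
dataset = list(range(20))

# "Preposition" the data: stripe the dataset across the nodes up front,
# so each node already holds its chunk before processing begins.
chunks = [dataset[i::NUM_NODES] for i in range(NUM_NODES)]

results = [None] * NUM_NODES
done = threading.Semaphore(0)      # counts "chunk processed" signals

def process(node_id, chunk):
    # Stand-in for the real per-node computation.
    results[node_id] = [x * x for x in chunk]
    done.release()                 # semaphore signal: this chunk is done

workers = [threading.Thread(target=process, args=(i, c))
           for i, c in enumerate(chunks)]
for w in workers:
    w.start()

# The coordinator waits for one signal per chunk...
for _ in range(NUM_NODES):
    done.acquire()

# ...then reintegrates the pieces later, undoing the striping.
output = [None] * len(dataset)
for node_id in range(NUM_NODES):
    for j, val in enumerate(results[node_id]):
        output[node_id + j * NUM_NODES] = val
```

In a real grid the semaphore signal would have to travel over the 
wide-area network, which is exactly where the latency-tolerance argument 
below bites: the pattern only wins if each chunk's compute time dwarfs 
the signalling round-trip.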

Done correctly, both of these become grid-enabling mechanisms.  They 
require non-traditional thinking to overcome the non-exponential curve 
associated with network speed and latency.  They will benefit from the 
introduction of some of the network protocols we've come to know and 
dream of, including MPLS and some real form of QoS agreement among the 
various carriers, ISPs, universities and other endpoints.  And they 
won't happen tomorrow.

IPv6 may enable some of this; QoS is integrated into its very fabric, 
but agreement on QoS implementation is still far from universal.  Worse, 
while carriers are looking at, or actually implementing, IPv6 within 
their network cores, they are not necessarily bringing it to the edge. 
Unless you're in Japan or Europe.  Oh, I'm sorry, this *IS* a globally 
distributed list.  Is anyone from Level 3 or AT&T listening?

The concept of grid computing has taken me a while to embrace, and I'm 
not sure I like it yet.  Overall, I tend to agree with Mark's rather 
cynical assessment that it's a WorldCom marketing ploy that acquired a 
life of its own.


Mark Hahn wrote:
>>I'm having a hard time marrying the two concepts of a cluster and a
>>grid together; but I'm sure much finer brains than mine have already
> "grid" is just a marketing term stemming from the fallacy that networks
> are getting a lot faster/better/cheaper.  without those amazing crooks 
> at worldcom, I figure grid would never have accumulated as much attention
> as it has.  I don't know about you, but my wide-area networking experience
> has improved by about a factor of 10 over the past 10-15 years.
> network bandwidth and latency is *not* on an exponential curve,
> but CPU power is.  (as is disk density - not surprising when you consider
> that CPUs and disks are both *areal* devices, unlike networks.)  so we should
> expect it to fall further behind, meaning that for a poorly-networked cluster
> (aka grid), you'll need even looser-coupled programs than today.
> cycle scavenging is a wonderful thing, but it's about like having
> a compost heap in your back yard, or a neighborhood aluminum
> can collector ;)
>>I'd appreciate that as well; "grids - hmmm - there're just the
>>latest computing fad - real high performance scientists won't use
>>them and grids will be just so much hype for many years to come".
> my users are dramatically bifurcated into two sets: those who want
> 1K CPUs with 2GB/CPU and >500 MB/s, <5 us interconnect, versus those
> who want 100 CPUs with 200KB apiece and 10bT.  the latter could be 
> using a grid; it's a lot easier for them to grab a piece of the 
> cluster pie, though.  I wonder whether that's the fate of grids 
> in general: not worth the trouble of setting up, except in extreme
> cases (seti at home, etc).
> _______________________________________________
> Beowulf mailing list, Beowulf at
> To change your subscription (digest mode or unsubscribe) visit

Gerry Creager -- gerry.creager at
Network Engineering -- AATLT, Texas A&M University	
Cell: 979.229.5301 Office: 979.458.4020 FAX: 979.847.8578
Page: 979.228.0173
Office: 903A Eller Bldg, TAMU, College Station, TX 77843
