[Beowulf] Re: Finally, a solution for the 64 core 4TB RAM market

Gerry Creager gerry.creager at tamu.edu
Fri May 29 08:57:19 EDT 2009


I don't think I want to go that path.  For one thing, it's possible to 
have node I/O overrun internode communications.  Not very likely if one 
goes to DDR or QDR InfiniBand, but...

Also, internode communication CAN become a bottleneck for MPI 
applications if one has too many cores on one node but still needs a 
large number of cores to achieve MPI scaling.  Internode communication 
will almost certainly be (significantly) slower than intranode 
communication.  At some point your application will have to account for 
synchronization.  If it's well written, perhaps it does that now; if 
not, then finding where to insert it becomes a thesis-level problem (or 
a shotgun solution).

All of this is particular to MPI codes.  If your applications are pThreads 
or OpenMP, then more cores and more memory per node is a good way to go.

It all depends on the applications mix for the cluster.

gc

Jonathan Aquilina wrote:
> would I be wrong in thinking that there might be people who want to 
> increase their processing power in one box but shrink the size of the 
> cluster?
> 
> On Fri, May 29, 2009 at 7:26 AM, Mark Hahn <hahn at mcmaster.ca 
> <mailto:hahn at mcmaster.ca>> wrote:
> 
>             while I like the idea of these being available, I wonder
>             where the
>             real (big) market is.
> 
> 
>         You mean other than commercial / HR databases that were built on
>         Sun's SMPs and now have a questionable upgrade path?
> 
> 
>     when I look at Sun's high-end boxes, I see either webservers
>     (the many-thread stuff) or kind of sad old-fashioned sparc minis.
>     the latter would probably lose out to a decent modern dual-socket
>     box (dual nehalem like the HP DL760).  the question is how much
>     volume there is in the >= 8-socket market, and I don't mean "how
>     many PHB's can be persuaded they need one because they're important".
> 
> 
>         Likely replacing current mid-range, <100-node clusters with a
>         single box.
> 
> 
>     unclear to me.  a current mid-range 100-node cluster is 800 cores,
>     and I don't think we're talking about that in an SMP.  Intel's recent
>     nehalem-ex preview was 128 hyperthreads (64 real).
> 
>     I would guess that most people who currently have clusters would
>     rather get bigger/faster/cooler clusters, rather than go to SMP,
>     unless for some
>     reason they have a fixed problem size.  possible, I guess.
> 
>     _______________________________________________
>     Beowulf mailing list, Beowulf at beowulf.org
>     <mailto:Beowulf at beowulf.org> sponsored by Penguin Computing
>     To change your subscription (digest mode or unsubscribe) visit
>     http://www.beowulf.org/mailman/listinfo/beowulf
> 
> 
> 
> 
> -- 
> Jonathan Aquilina
> 
> 
> ------------------------------------------------------------------------
> 
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf

-- 
Gerry Creager -- gerry.creager at tamu.edu
Texas Mesonet -- AATLT, Texas A&M University	
Cell: 979.229.5301 Office: 979.458.4020 FAX: 979.862.3983
Office: 1700 Research Parkway Ste 160, TAMU, College Station, TX 77843


