[Beowulf] cluster advice

Glen Gardner Glen.Gardner at verizon.net
Tue Sep 20 23:26:33 EDT 2005


A dual-core CPU on a single-CPU motherboard ought to be a great performer.

The workaround for memory bandwidth issues is to use a small number of
large memory modules rather than a large number of small ones, so that
fewer modules compete for bandwidth.

Competition for memory bandwidth ought to be no worse on a single-CPU
system than on a dual-CPU one, assuming dual cores in both.  Present
Linux kernels use NUMA on Opteron MP, so any available memory location
is accessible by any CPU; bandwidth at the memory controller could, in
principle, become a bottleneck no matter what you do.
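On a NUMA box you can see the layout, and experiment with where a job's
memory lands, using numactl (illustrative commands, assuming the numactl
package is installed; the job name is made up):

```shell
# Show NUMA nodes, their memory sizes, and inter-node distances
numactl --hardware

# Pin a job and its memory to node 0, so it never pays the
# HyperTransport hop to the other CPU's controller
numactl --cpunodebind=0 --membind=0 ./render_job

# Or interleave pages across both controllers, trading latency
# for aggregate bandwidth
numactl --interleave=all ./render_job
```

On a single-socket board with one controller these options make no
difference, which is part of the point above.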

The thing is, at some point some part of the system hardware will
bottleneck no matter what you do.  So where do you want your bottleneck
to be?  CPU?  Memory?  I/O?
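If I/O is where you let it land, the channel bonding asked about below
is easy to set up on a 2.6 kernel.  A sketch, assuming the bonding
driver and ifenslave are present (the address and interface names are
examples, not anything from Asa's setup):

```shell
# In /etc/modprobe.conf, load the bonding driver at boot:
#   alias bond0 bonding
#   options bonding mode=balance-rr miimon=100
# mode=balance-rr stripes frames round-robin across both slaves;
# miimon=100 checks link state every 100 ms.

# Then bring up the bond and enslave both GigE ports:
modprobe bonding mode=balance-rr miimon=100
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1
```

Note that round-robin across two separate switches can reorder packets,
so test with your actual traffic before committing to the topology.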


Glen




On Tue, 2005-09-20 at 09:44 -0700, Michael Will wrote:
> A single CPU with dual cores has one drawback that might hit you, 
> especially on image processing:
> 
> The memory architecture of an Opteron board is such that each CPU has 
> its own bank of memory, with its own dual-channel memory controller. 
> Access to the RAM of the other CPUs happens over HyperTransport, 
> transparently to your application, which sees only one memory space. 
> This gives the Opteron a significant advantage over the Xeons for 
> memory-intensive applications such as BLAST.
> 
> If you only use one CPU socket, you can only put in half the amount of 
> RAM (8G instead of 16G, which might not be an issue for you), and you 
> have only one memory controller instead of two.
> 
> Effectively this can lead to contention similar to what the Xeons, by 
> design, have to deal with through the front-side bus architecture.
> 
> Also, you will need twice as many chassis compared to using dual-CPU 
> dual-core nodes.
> 
> If cooling is an issue, you might want to look at low-voltage CPUs. 
> They give you more CPU cycles per watt, even if each individual CPU 
> is slower.
> 
> You seem to be in a tight spot with budget and power constraints.
> 
> Michael Will
> 
> 
> asa hammond wrote:
> 
> > Hello all. I am going to be building a cluster to handle image  
> > rendering and processing and I wanted to get some advice.  The  
> > cluster will have 2 kinds of problems to crunch on,
> > 1) heavy cpu style jobs with moderate IO and
> > 2) heavy IO jobs with up to 200 meg image retrieval off of the file  
> > server and moderate cpu requirements.
> >
> > We have 10-14k to spend on the nodes at the moment.
> >
> > What are your thoughts on the best node configurations to purchase.
> > If we go with dual core athlon64 x2's  we can get about 10-12 nodes  
> > with single cpu motherboards.
> > Any reason to go with dual opterons over the dual core athlon64s if  
> > the cache sizes and clock speeds are the same?
> >
> > Any alternative case systems you all have experience with?  Do any of  
> > you run without cases a la early google? Everything at the moment is  
> > rackmount cases but we figure if we are using micro atx boards we  
> > could fit two boards per 1U front to back with good cooling on  
> > aluminium rackmount trays.  Anyone doing such extreme cost-cutting  
> > measures to save on the obscene rackmount case+rail costs?
> >
> > Any mobo recommendations?  What is your feeling about the micro atx  
> > boards?  not longterm reliable?  We don't need any of the fancy  
> > extras most gamer and server motherboards have. All we need to have  
> > is pxe bootable gigabit(dual on mobo would be great), slots for ram  
> > (2 gig) and a cpu.  We can go with a gigabit on the board and add an  
> > extra nic to get two total, etc.
> >
> > We are interested in going the channel bonding route for better than  
> > gigabit throughput with 2 ports per node, each feeding into a  
> > separate switch connected to separate ports on the server.
> >
> > Any switch recommendations?  Do I need to go with a layer 3 switch if  
> > all I am doing is running my machines this way?  We are thinking  
> > several 16 port netgears. We want to go for future-proof  
> > expandability as we will be adding nodes as we can afford them.
> >
> > Heating is becoming an issue as well.  We have an 8x10x6-foot room  
> > with no AC, just forced-air cooling.  Any good advice for remote AC,  
> > as well as any kind of quiet passive cooling, would be very  
> > useful.  We need about 1.5 tons of cooling as far as I can figure.   
> > Ambient air temp is 73 deg and the room is running at 84-89.
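That 1.5-ton estimate is easy to sanity-check: electrical load in watts
converts to BTU/hr and then to tons of cooling.  A quick sketch (the
wattage figures below are assumptions for illustration, not measured
numbers from Asa's hardware):

```python
# Rough AC sizing for a small cluster room.
WATTS_PER_NODE = 350   # assumed per-node draw under load
NODES = 12
OVERHEAD_WATTS = 800   # assumed file server, switches, UPS losses

def cooling_tons(total_watts):
    """Convert electrical load to tons of refrigeration.
    1 W = 3.412 BTU/hr; 1 ton of cooling = 12,000 BTU/hr."""
    btu_per_hr = total_watts * 3.412
    return btu_per_hr / 12_000

load = NODES * WATTS_PER_NODE + OVERHEAD_WATTS
print(f"{load} W -> {cooling_tons(load):.2f} tons")
# prints: 5000 W -> 1.42 tons
```

At roughly 5 kW of total load that comes out just under 1.5 tons, so the
estimate looks about right, with a little headroom for growth.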
> >
> > Pointers to any info would be great.  This has all been discussed  
> > before but now those recommendations are a bit outdated as the march  
> > of moore goes on.
> >
> > Thank you all in advance.  This list has tons of great information.   
> > A great resource for all of us.
> >
> > Asa
> >
> > _______________________________________________
> > Beowulf mailing list, Beowulf at beowulf.org
> > To change your subscription (digest mode or unsubscribe) visit 
> > http://www.beowulf.org/mailman/listinfo/beowulf
> 
> 
> 
> -- 
> Michael Will
> Penguin Computing Corp.
> Sales Engineer
> 415-954-2822
> 415-954-2899 fx
> mwill at penguincomputing.com 
> 



