ideal motherboard for diskless beowulf

Robert G. Brown rgb at phy.duke.edu
Mon Jun 16 13:36:54 EDT 2003


On Sat, 14 Jun 2003, kolokoutsas konstantinos wrote:

> 
> Thank you all for the input! 
> 
> This beowulf will be dedicated to running one
> particular Monte Carlo particle acceleration code
> already running within RH7.2 and quite dependent on it
> in many ways, thus the RH7.2 criterion. 

I'm sorry, how is that?

  a) Can't you just recompile under 7.3 or, for that matter, 9?

  b) I would expect most applications compiled for 7.2 to just plain
"run" on 7.3, at least.  Almost by definition, the major libraries don't
change much between minor revision increments.  It would certainly be
worth trying to run your application on 7.3.  It would also be worth
porting your code into source RPM format so that it can be rebuilt in
five minutes whenever you want to run it on an RPM-supporting
architecture (a minimal sketch of what that involves follows below).
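
For what it's worth, the packaging work is mostly a matter of writing
one short spec file.  A minimal sketch -- the package name, version,
and install path here are purely hypothetical, and details vary a bit
between RPM releases:

  # mycode.spec -- hypothetical minimal spec for a Monte Carlo code.
  # The tarball must unpack into a directory named %{name}-%{version}.
  Summary: Monte Carlo particle acceleration code
  Name: mycode
  Version: 1.0
  Release: 1
  License: GPL
  Group: Applications/Engineering
  Source: %{name}-%{version}.tar.gz
  BuildRoot: %{_tmppath}/%{name}-%{version}-root

  %description
  Monte Carlo particle acceleration simulation code.

  %prep
  %setup -q

  %build
  make

  %install
  rm -rf $RPM_BUILD_ROOT
  mkdir -p $RPM_BUILD_ROOT/usr/bin
  install -m 755 mycode $RPM_BUILD_ROOT/usr/bin/mycode

  %clean
  rm -rf $RPM_BUILD_ROOT

  %files
  /usr/bin/mycode

Drop the tarball in SOURCES and the spec in SPECS, then build both the
binary and source packages with "rpm -ba mycode.spec" (on RPM 4.1 and
later the command is "rpmbuild -ba").  From then on, "rpm --rebuild
mycode-1.0-1.src.rpm" (or "rpmbuild --rebuild" on newer systems)
regenerates a binary package on any RPM-based box.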

> The 12-node config will serve as a test for a larger
> cluster, thus the very limited budget, and the choice
> of (the cheaper) AMD CPUs. 
> 
> The micro-ATX form factor is of interest because "I
> was given the challenge..." of putting as many
> motherboards in one customized full tower box as
> possible. Dual/Quad CPU motherboards are not an
> option, while due to portability issues, racks are out
> of the question. 

Hmm, I don't know what you mean by "portability issues" either.  I've
built both tower/shelf clusters and rack clusters and set them up in
e.g. Expo booths for a three-day demo.  Tower/shelf clusters are a
total PITA to transport.  Rack-based clusters are often much easier and
achieve a higher CPU density >>if<< you use a rolling rack.

At Linux Expo three or four years ago we built a dual rack Netfinity
cluster with dual P3 nodes, kindly loaned to us by IBM.  The whole thing
came off the truck and was set up and running (48 CPUs) in four hours
or so.  Taking it down was even faster -- a couple of hours
start to finish.  It took me almost as long to set up my much smaller
tower cluster on a rolling shelf unit I brought from home, with all the
cabling and carrying.

Nowadays, there are some REALLY cool rolling four-post racks.  Check out
the "department server rack" on

  http://www.phy.duke.edu/brahma/brahma_tour.php

(right above "Seth, Looking Grumpy":-).  I don't remember how many U it
is, but at a guess perhaps 20U.  One could EASILY fit 16 1U cases in it
with room to spare, or 32 (or more) CPUs.  Monitor and KVM (switch) on
top, middlin' hefty UPS on the bottom, and you can literally roll it
from room to room without even powering it down!  And this is only one
choice -- I'll bet there are a variety of options even here.  When you
are ready to scale up, just buy two-post racks and move the nodes into a
permanent home...

If your money is REALLY tight perhaps you can't afford this, but if you
are trying to "sell" the result, a rolling rack is going to beat the
pants off of a jury-rigged tower setup in crowd appeal...

   rgb

> 
> Thanks once again,
> Kostas Kolokoutsas

-- 
Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb at phy.duke.edu


