[Beowulf] cluster building for teaching (on the cheap)

Mark Hahn hahn at physics.mcmaster.ca
Sun Aug 27 16:52:31 EDT 2006


> better).  Additionally, the system needs to be cheap (less than $5k).

why spend anything?

> After about 4 hours of diffusing around the web it has become clear to me 
> that most (all?) commercial solutions are too expensive and I should try to 
> put something together myself.

clusters are easy.  commercial solutions are, IMO, either pure CYA
or for people who really have no interest beyond pressing "go" and 
getting the results later.

> In browsing NewEgg last night one interesting 
> solution was to set up 2  dual processor, dual core machines (ie 2 
> motherboards, 4 AMD 1.8GHz Opteron 265's, 8 total execution cores).  Spec'ed 
> out (1GB Ram per core, P-ATA hard drives), this looks like about $500 per 
> execution core.

there's no good reason to go PATA; SATA drives cost about the same these days.

I haven't priced machines recently, but would still expect that 4 machines 
with single-socket, dual-core (X2) chips would be cheaper.  obviously, two
machines make for a somewhat odd "cluster" experience, since that minimizes
the importance of the network (or of message passing at all.)

> (1) Does Linux/MPICH/gcc/g95 work pretty well with dual core opteron 
> processors?

of course.

> (2) Am I better off buying 8 of the cheapest Dells I can find and networking 
> those together?

Dell has nothing special.  without a clear requirement to emphasize memory
bandwidth per core, I'd always default to dual-core, I think.  so four
dual-core desktops would be a wise choice.

> (2.5) Do you pay a premium for a 1-u or 2-u enclosure?

of course!  not to mention that you pay a premium for "server" motherboards 
that support dual-socket.  for a small cluster, 1U is a waste of money.  2U
gets you more space for disks (and permits the use of "desktop" boards,
which have 3-high audio sockets that don't really fit in 1U.)  otherwise,
3U is where you at least start to get into the domain where you can use 
normal desktop (aka mass-market) parts.

but for small clusters, go with cheap desktopish cases, ideally those with 
a sane airflow design (front-to-back, preferably bottom-to-top, with a large 
exhaust fan.)

> (3) In general (processor type, peripherals held constant), is it cheaper to 
> buy 2x standard processor boxes, 1 dual processor box, or half of a dual 
> processor, dual core box?

but processor types are not constant.  the question devolves to specific
choices: going rackmount at all is a huge price premium.  once you go
rackmount, a lot depends on the overall power rating, chassis depth,
features, as well as height-in-u.

dual-socket motherboards are often more than twice the cost of single-socket
ones.  going dual-socket usually also implies registered/ecc dram, another
price premium (and another feature critical to large clusters, but not small.)
dual-socket tends to also imply a higher-end power supply, which when
compounded with a small rackmount chassis leads to higher price as well.
disks and network are about the only factors that can be held constant.
of course, you can dispense with disks entirely (makes a lot of sense in some 
instances, but then again, disks are cheap.)  (actually, dispensing with 
disks makes the most sense with either huge or very cheap clusters: on a huge 
cluster, you might prefer to avoid the maintenance overhead; for very cheap 
nodes, a $60 disk is significant.)
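for the curious, a common diskless setup is PXE boot plus an NFS root.  a
sketch, assuming ISC dhcpd, a TFTP server, and syslinux's pxelinux (all
addresses and paths below are made up):

```
# /etc/dhcpd.conf fragment (ISC dhcpd) on the head node
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.199;
    next-server 192.168.1.1;        # TFTP server holding pxelinux.0
    filename "pxelinux.0";
}

# /tftpboot/pxelinux.cfg/default -- nodes boot a kernel with an NFS root
DEFAULT linux
LABEL linux
    KERNEL vmlinuz
    APPEND initrd=initrd.img root=/dev/nfs nfsroot=192.168.1.1:/srv/node-root ip=dhcp
```

nodes then need nothing local but a PXE-capable NIC.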

for a starter cluster, I would definitely choose a handful of cheap desktop
machines.  no reason they can't be repurposed; no reason they need to all be 
the same configuration.

higher-end (dual socket, rackmount, "server-grade", ECC) does get you somewhat
higher reliability, but does this matter to you?

higher-end also opens options for things like IPMI (console-less control and 
monitoring is _essential_ for large clusters.)

regards, mark hahn.
_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
