[Beowulf] Re: Beowulf of bare motherboards (fwd)

Jack Wathey wathey at salk.edu
Wed Sep 29 14:03:08 EDT 2004

Andrew thought there would be enough general interest in this that I 
should post it to the list, so here it is.  My friend who helped me build 
this thing (which we call "ammonite") took some digital photos of it.
If anyone on the list wants to dedicate a little corner of his/her website
to pictures of this cluster, let me know and I'll try to get copies to you.

  ---------- Forwarded message ----------
Date: Tue, 28 Sep 2004 10:41:36 -0700 (PDT)
From: Jack Wathey <wathey at salk.edu>
To: Andrew Piskorski <atp at piskorski.com>
Cc: Jean-Christophe Ducom <jducom at nd.edu>
Subject: Re: [Beowulf] Re: Beowulf of bare motherboards

On Mon, 27 Sep 2004, Andrew Piskorski wrote:

>> Unfortunately I don't have a website describing my cluster, but, if you're
>> interested, I could send you more details.
> Yes please, more details would be excellent!

This may be more detail than you want.  A picture is probably worth 1000 words here.

The electronics:

100 dual-Athlon nodes: Gigabyte Technology GA-7DPXDW-P motherboards, Athlon 
MP 2400+ processors, 1GB ECC DDR memory per node (Kingston).

Each motherboard has its own 250W PFC power supply.

The CPU coolers are Thermalright SK6+ all-copper heatsinks with Delta 38cfm 
fans; the thermal compound is Arctic Silver 3.

The switch is an HP ProCurve 5308xl with one 4-port 100/1000-T module (model 
J4821A) and four 24-port 10/100-TX modules (model J4820A).  The server node is 
in a conventional mid-tower case with a SCSI RAID 5 system (Adaptec 2120S) and 
uses a gigabit NIC (SysKonnect SK-9821).  The 99 client nodes (bare 
motherboards in the shelves) are diskless and boot via PXE using the 100Mbps 
on-board Ethernet interface.
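For reference, diskless PXE booting of this kind needs a DHCP server that points the nodes at a TFTP server holding the boot loader.  The post doesn't include the actual configuration, but a minimal ISC dhcpd sketch (all addresses and filenames below are hypothetical, not taken from ammonite) would look like this:

```
# Hypothetical /etc/dhcpd.conf fragment -- NOT the actual ammonite config.
subnet 10.0.0.0 netmask 255.255.255.0 {
    range 10.0.0.10 10.0.0.120;
    next-server 10.0.0.1;      # TFTP server (here, the RAID server node)
    filename "pxelinux.0";     # PXELINUX boot loader, loads kernel + initrd
}
```

In diskless setups like this the clients typically then mount their root filesystem over NFS from the server node.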

Each client node is a motherboard, 2 CPUs with coolers, memory, power supply, a 
sheet of 1/16"-thick aluminum and NOTHING ELSE.  No PCI cards of any kind, no 
video card.  The only connections are a power cord and a Cat5e cable. 
The BIOS is set to boot on power-up and to respond to wake-on-LAN.  There are 
17 surge protectors on the left and right ends of the shelving units, each of 
which supplies 6 client nodes, except one that supplies only 3.  I bring the 
cluster up by turning them on in groups of 6, a few seconds apart.
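Since the BIOS responds to wake-on-LAN, the staged power-up could also be scripted.  This is a hypothetical sketch (the MAC addresses, group size, and delay are placeholders), not the procedure actually used on ammonite:

```python
import socket
import time

def magic_packet(mac: str) -> bytes:
    """Build a standard wake-on-LAN magic packet:
    6 bytes of 0xFF followed by the MAC address repeated 16 times."""
    raw = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(raw) != 6:
        raise ValueError(f"bad MAC address: {mac}")
    return b"\xff" * 6 + raw * 16

def wake_in_groups(macs, group_size=6, delay_s=5.0, bcast="255.255.255.255"):
    """Send WOL packets in groups, pausing between groups so the
    surge protectors see a staggered inrush load."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    for i in range(0, len(macs), group_size):
        for mac in macs[i:i + group_size]:
            sock.sendto(magic_packet(mac), (bcast, 9))  # UDP "discard" port
        if i + group_size < len(macs):
            time.sleep(delay_s)
    sock.close()
```

A magic packet is always 102 bytes (6 + 16 x 6), so a malformed MAC is easy to catch before anything is sent.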

The mechanical stuff:

The shelves are Tennsco Q-line industrial steel shelves.

There are many alternative shelves that would work as well, and some are 
easier to assemble than these, but these were easily adaptable to my 
client node dimensions.  Each 36" x 18" shelf has 9 client nodes on it, 
except for one shelf that has the Ethernet switch and controller for the 
blower (see below). The whole cluster is in a rack made from two shelving 
units.  Each shelving unit is 7ft tall by 3ft wide; the whole thing is 7ft 
x 6ft.  Each of the 2 units has seven 36" x 18" shelves.  If I had it 
to do over again, I might use the 36" x 24" size instead, because I had 
some problems with the power cords at the back interfering with the cross 
braces.  I ended up making my own cross braces on aluminum standoffs to 
get the extra clearance (yet another example of how this kind of approach 
ends up eating more time than you expect).  The seven shelves are 14" 
apart vertically, which gives about 12.6" vertical clearance between the 
top surface of a shelf and the underside of the shelf above it.  The top 
shelf just serves as the "roof" of the enclosure, so there are 6 usable 
shelves per unit, or 12 total for the whole 2-unit rack.  One, near the 
middle vertically, has the Ethernet switch and inverter.  The other 11 
have 9 nodes each, 4" apart horizontally.
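The layout arithmetic above checks out; a quick sanity check (my calculation, not from the original post):

```python
# Sanity check of the rack layout described above.
shelves_per_unit = 7
units = 2
usable_shelves = (shelves_per_unit - 1) * units  # top shelf is just the "roof"
node_shelves = usable_shelves - 1                # one shelf holds switch + inverter
client_nodes = node_shelves * 9                  # 9 client nodes per shelf
print(client_nodes)  # 99

# Power feed: 17 surge protectors, 16 feeding 6 nodes and one feeding 3.
print(16 * 6 + 1 * 3)  # 99 -- matches the client-node count
```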

Mechanically, a client node starts as a 17.75" x 12.5" sheet of 1/16" aluminum 
(6061-T6).  These were cut to my specs by the vendor, Industrial Metals Supply.

A hobbyist friend who has a milling machine in his garage did the drilling of 
the holes in the aluminum sheets.  He drilled them in stacks of 10. The 
locations of these holes need to be precise, and there were 13 holes per sheet 
(10 for motherboard standoffs).  Without my friend's milling machine and 
expertise, the drilling would have been a nightmare, and I would not even have 
attempted it.

The steel shelves are horizontal, of course, and the aluminum sheets sit on 
them vertically (perpendicular to shelf, 12.5" tall, 17.75" deep). The power 
supply also sits on the shelf, at the back of the rack, and is attached to one 
corner of the aluminum sheet with two screws through the sheet and 2 small 
90-degree steel brackets.  The PS is oriented so that its exhaust blows out the 
back of the rack.  The motherboard is mounted on the same side of the aluminum 
as the PS, oriented so that airflow (which is front-to-back through the rack) 
is parallel to the memory sticks. This also puts the cpus near the front of the 
rack, where the air is coolest.  Putting the PS at the bottom like this makes 
the node more stable.  A node will stand quite stably on the shelf, even though 
the only surfaces contacting the shelf are the PS and one edge of the aluminum 
sheet.  Even so, I attach the top front corner of each sheet to the shelf above 
it with a steel bracket and nylon thumbscrew, to make sure they won't dance 
around in an earthquake.  Removing a node is easy: just remove the nylon 
thumbscrew and it slides out.  The horizontal spacing of the nodes is limited 
to about 4" minimum by the minimum dimension of the PS and by the need for 
breathing room for the cpu coolers.

The front edge of every other shelf has a 2" x 1" cable duct, through which the 
cables are routed.  Near the switch, the ducts expand to 2" x 2".  The cable 
ducts also serve as the mounting surfaces for 6 custom-made air filters, each 
of which is about 28" x 36" x 0.5".  The filters are Quadrafoam FF-5X, 
60 ppi, half an inch thick, with aluminum grid support on both sides, from Universal 
Air Filters.


Although the filters do clean the incoming air, their main purpose is to 
provide just enough resistance to airflow to make the airflow uniform for all 
nodes in the rack.  Which brings us to...


The back of the rack is covered with a pyramid-shaped plenum made of 1-inch 
thick fiberglass duct board (Superduct RC AHS-200).

This leads to the intake of a 10,000 cfm forward-curve, single-inlet 
centrifugal blower with a 5 hp 3-phase motor:
http://www.grainger.com/  (search for Grainger part #7H071)

The speed of the blower is controlled by a Teco Westinghouse FM-100 inverter.
I run the blower at about half its rated speed most of the time, and this keeps 
the nodes happy.  Delta-T between intake and exhaust is about 10 deg F.  At 
full speed it drops to about 5 deg F.  The blower is quiet, especially at half 
speed.  Most of the noise comes from the Delta fans on the cpu coolers.
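Those delta-T numbers are consistent with the blower rating.  As a sanity check (my arithmetic, not from the original post), the standard sea-level rule of thumb Q [BTU/hr] = 1.08 x CFM x dT [F] lets you back out the heat load:

```python
def heat_load_watts(cfm: float, delta_t_f: float) -> float:
    """Sensible heat removed by airflow at sea level:
    Q [BTU/hr] = 1.08 * CFM * dT [F]; 1 W = 3.412 BTU/hr."""
    return 1.08 * cfm * delta_t_f / 3.412

# Half speed (~5,000 cfm) at dT ~10 F, full speed (10,000 cfm) at dT ~5 F:
print(heat_load_watts(5_000, 10), heat_load_watts(10_000, 5))
# Both come out near 16 kW -- plausible for 100 dual-Athlon nodes
# dissipating roughly 150-200 W each.
```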

Like I said, I don't really advocate this as the best way to build a cluster, 
even though it has worked out well for me.  It took months of design and 
fabrication time.

Hope this helps,


Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
