Robert G. Brown rgb at
Thu Oct 23 07:52:14 EDT 2003

On Wed, 22 Oct 2003, Arthur H. Edwards wrote:

> I'm moving a cluster into a 9.25x11.75 foot room (7.75 ' ceiling). The
> cluster now has 48 nodes (single processor AMD XP 2100+ boxes). They will
> be on metal racks. Does anyone have a simple way to calculate cooling
> requirements? We will have fair flexibility with air flow.

My kill-a-watt shows 1900+ AMD Athlon duals drawing roughly 230 W/node
(or 115 W per processor) under steady, full load.  I don't have a single
CPU system in this class to test, but because of hardware replication I
would guess that one draws MORE than half of this, probably in the
ballpark of 150-160 W, where YMMV depending on memory, disk, and other
configuration details.  Your clock is also a bit higher than what I
measure and there is a clockspeed dependence on the CPU side, so you
should likely guesstimate highball, say 175 W, OR buy a <$50 kill-a-watt
(numerous sources online) and measure your prototype node yourself to
get a precise number.

Then it is a matter of arithmetic.  To be really safe and make the
arithmetic easy enough to do on my fingers, I'll assume 200 W/node.
Times 48 is 9600 watts.  Plus 400 watts for electric lights, a head node
with disk, a monitor, and a switch (this is likely lowball, but we
highballed the nodes).  Call it 10 kW in a roughly 1000 cubic foot room.
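The arithmetic above can be sketched in a few lines of Python; the
wattage figures are the same deliberately highballed estimates used in
the text, not measured values:

```python
# Back-of-the-envelope cluster power budget (estimates from the text above).
NODES = 48
WATTS_PER_NODE = 200    # highball: measured duals ~230 W, singles likely 150-175 W
OVERHEAD_WATTS = 400    # lights, head node with disk, monitor, switch (lowball)

total_watts = NODES * WATTS_PER_NODE + OVERHEAD_WATTS
print(total_watts)      # 10000, i.e. roughly 10 kW
```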

One ton of AC removes approximately 3500 watts continuously.  You
therefore need at LEAST 3 tons of AC.  However, you'd really like to be
able to keep the room COLD, not just on a par with its external
environment, and so need to be able to remove heat infiltrating through
the walls, so providing overcapacity is desirable -- 4-5 tons wouldn't
be out of the question.  This also gives you at least limited capacity
for future growth and upgrade without another remodelling job (maybe
you'll replace those singles with duals that draw 250-300 W apiece at
the same rack density one day).
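The tonnage arithmetic can be sketched the same way, using the 3500
W/ton rule of thumb and the ~10 kW load computed above:

```python
import math

WATTS_PER_TON = 3500       # rough continuous heat removal per ton of AC
heat_load_watts = 10000    # total budget from the arithmetic above

min_tons = math.ceil(heat_load_watts / WATTS_PER_TON)
print(min_tons)            # 3 -- the bare minimum; 4-5 tons buys headroom
```

Rounding up (rather than to the nearest ton) matters: 10000/3500 is
about 2.86, and 2.5 tons would leave the room slowly heating under
full load.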

You also have to engineer airflow so that cold air enters on the air
intake side of the nodes (the front) and so that the air exhausted from
their rear, heated after cooling the nodes, is picked up by a warm air
return.  I don't mean that you necessarily need air delivery and returns
per rack, but the steady state airflow needs to retard mixing and above
all prevent air exhausted by one rack from being picked up as intake by
the next.

There are lots of ways to achieve this.  You can set up the racks so
that the node fronts face in one aisle and node exhausts face in the
rear and arrange for cold air delivery into the lower part of the node
front aisle (and warm air return on the ceiling).  You can put all the
racks in a single row and deliver cold air as low as possible on the
front side and remove it on the ceiling of the rear side.  If you have a
raised floor and four post racks with sidepanels you can deliver it from
underneath each rack and remove it from the top.

This is all FYI, but it is a good idea to hire an actual architect or
engineer with experience in server room design to design your
power/cooling system, as there are lots of things (thermal power kill
switch, for example) that you might miss but they should not.  However,
I think that the list wisdom is that you should deal with them armored
with a pretty good idea of what they should be doing, as the unfortunate
experience of many who have done so is that even the pros make costly
mistakes when it comes to server rooms (maybe they just don't do enough
of them, or aren't used to working with 1000 cubic foot spaces).

If you google over the list archives, there are long-ranging, extended
discussions of server room design that embrace power delivery, cooling,
node issues, costs, and more.


> Art Edwards

Robert G. Brown	             
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb at

Beowulf mailing list, Beowulf at
To change your subscription (digest mode or unsubscribe) visit
