[Beowulf] Cooling vs HW replacement

Robert G. Brown rgb at phy.duke.edu
Tue Jan 18 00:50:08 EST 2005

On Mon, 17 Jan 2005, George Georgalis wrote:

> If you really want to focus on efficiency and engineering, I bet one
> (appropriately sized) power-supply per 3 or 5 computers is a sweet spot.
> They could possibly run outside the CPU room too.

For a smallish cluster, I was actually just communicating with somebody
who has exactly such a cluster -- laid out on open shelves, one OTC PS
per shelf, three mobos per shelf, no chassis at all, largish fans
blowing right over the shelf mount. All it required was a bit of custom
wiring harness to distribute power down the shelves.

Regarding disks, most computers don't NEED local hard drives any more
for many/most computations.  So skip the floppy, the HD, any CD drive --
just get lots of memory (to act as a de facto ramdisk), CPU, PXE NIC and
video (the latter onboard).  This saves power, saves money, gives you
fewer components to fail, and leaves you with money to buy better AC.
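For the curious, a diskless PXE setup like that usually amounts to a DHCP
server pointing nodes at a boot image. A minimal sketch (all names,
addresses, and filenames below are illustrative placeholders, not from
any particular cluster), using the classic ISC dhcpd + PXELINUX approach:

    # dhcpd.conf fragment: hand out addresses and a PXE boot image
    subnet 192.168.1.0 netmask 255.255.255.0 {
      range 192.168.1.100 192.168.1.200;
      next-server 192.168.1.1;     # TFTP server holding the boot files
      filename "pxelinux.0";       # PXELINUX bootloader
    }

    # pxelinux.cfg/default: kernel plus an initrd that becomes the
    # in-memory root filesystem (the "de facto ramdisk")
    DEFAULT linux
    LABEL linux
      KERNEL vmlinuz
      APPEND initrd=initrd.img root=/dev/ram0

From there the node runs entirely out of RAM, which is why lots of memory
substitutes for the missing hard drive.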

But remember, also -- you MUST remove all the heat that you generate or
things will get hotter and hotter as they operate.  Putting e.g. PS's
outside the room or inside the room just alters where you have to remove
the heat from or what components you're going to choose to run hotter.
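The arithmetic behind "remove all the heat you generate" is worth making
explicit: essentially every watt the cluster draws ends up as heat that
the AC must remove. A quick sketch (the node count, per-node wattage, and
PS efficiency below are made-up illustrative numbers):

```python
# Every watt drawn by the cluster must eventually be removed as heat,
# wherever the components happen to sit.
def cooling_load(total_watts):
    """Convert electrical draw (W) to BTU/hr and tons of AC."""
    btu_per_hr = total_watts * 3.412   # 1 W = 3.412 BTU/hr
    tons = btu_per_hr / 12000.0        # 1 ton of cooling = 12,000 BTU/hr
    return btu_per_hr, tons

# e.g. 30 nodes at ~150 W each, fed by power supplies at ~80% efficiency
draw = 30 * 150 / 0.80
btu, tons = cooling_load(draw)
print(f"{draw:.0f} W -> {btu:.0f} BTU/hr -> {tons:.2f} tons of AC")
```

Note that moving the power supplies outside the room shifts the PS's
conversion losses out of the AC's budget, but the watts delivered to the
motherboards still come out as heat in the room.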

I'll try to talk the owner of the cluster into posting his cluster URL.
I really want him to consider writing it up for e.g. CWM.


Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email: rgb at phy.duke.edu

Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf