John Hearns responded that he thought Jim's suggestions were good ones. He went on to mention a machine room he had seen in England that he liked, which used power connectors that came down from the ceiling. Robert Brown replied that they too had power connectors coming from the ceiling.

In a separate post, Robert added some items to the list of things needed in a server room. The first item he would add is a workbench with various tools and good lighting. He would also add a small KVM switch, a flat-panel display and keyboard, and various other little bits and bobs for doing your wizardry. Robert also recommended a nice swivel chair for working at the workbench, as well as headphones to cover up the machine room noise (and for listening to your music collection while you work). More generally, he suggested engineering the room for growth now, before the renovation work begins - for example, installing some additional HVAC capacity, perhaps 10 tons, so the room can grow without having to be "re-renovated" later.

John Hearns jumped in to add that a nice 19-inch rack-mounted fridge would be good, as well as a 4U wine rack, and provided links to both items. RGB then responded to John's posting with some sage advice about drinking and computing. However, one should always be open to new ideas and possibilities.

Robert then posted some details about the workbenches they use. He said that they use a leftover wooden workbench from a Physics lab (wood being the key word). He then detailed much of what they use when they diagnose, repair, and build nodes. Finally, he had some advice about whether to buy a support option from the various vendors or to support the systems yourself.

Jakob Oestergaard had some very good advice for the perfect machine room: he thought a good first aid kit would be a very worthwhile addition. He also echoed other recommendations for several flashlights, and Chris Samuel mentioned that you should have spare batteries for the flashlights as well.

The continuing discussion about the physical aspects of beowulfery is showing that people are seriously considering how to properly design their cluster environment.

Local Disk or NFS Root?

One of the topics discussed on the Beowulf mailing list is how people construct their clusters from an operating system perspective. There are many ways you can "construct" your cluster. On July 14, 2004, Brent Clements asked whether people preferred to install the operating system (OS) on each node or to use nfsroot for the compute nodes. Brent said that in his experiments, using nfsroot was a heck of a lot easier than maintaining a SystemImager configuration (they used SystemImager for installing Linux on the nodes).
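Neither post went into the mechanics, but for readers who have not seen an nfsroot setup, the basic pieces are small: an NFS export of the root tree on the head node, and a kernel command line handed to each compute node by its network boot loader. The paths and addresses below are invented for illustration, and the compute node kernel must be built with NFS-root support (CONFIG_ROOT_NFS):

    # /etc/exports on the head node (hypothetical path and subnet)
    /export/nfsroot   192.168.1.0/24(ro,no_root_squash,sync)

    # kernel command line passed to a compute node by PXE/Etherboot
    root=/dev/nfs nfsroot=192.168.1.1:/export/nfsroot ip=dhcp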

Tim Mattox replied that Brent had missed a third option - using a RAM disk as the root file system. Tim said that for years he had done both nfsroot and disk-full clusters (meaning each node had a disk with the OS installed on it). He said that the RAM disk approach is head and shoulders above the others. He recommended examining Warewulf, which he thought to be a very good cluster distribution. In fact, he said he liked Warewulf so much that he became one of the developers. Tim also went on to mention that a drawback of the nfsroot approach is that if the NFS server is rebooted or down for any length of time, the compute nodes tend to fail.

Mark Hahn said that he thought the nfsroot approach created quite a bit of traffic, but that it was incredibly convenient. Mark said that he has built a couple of clusters using nfsroot (around 100 dual nodes) and that there have not been any significant problems with the NFS server. He likes the nfsroot approach so much that he said that if there were problems, he would split the NFS traffic across two file servers rather than abandon nfsroot.

Tony Travis posted that he has a 32-node AMD Athlon cluster running ClusterNFS with an openMosix kernel. He exports the root partition read-only, and the compute nodes have symlinks for the volatile files. Each compute node also has a local disk for temporary space and swap.
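Tony did not post his exact layout, but the general trick behind a read-only shared root is straightforward: volatile paths in the exported tree are replaced with symlinks that point at node-local, writable storage. The paths below are a hypothetical sketch, not his actual configuration:

    # on the head node, inside the exported root tree
    ln -s /local/var/run    /export/nfsroot/var/run
    ln -s /local/var/lock   /export/nfsroot/var/lock
    ln -s /local/tmp        /export/nfsroot/tmp
    ln -s /proc/mounts      /export/nfsroot/etc/mtab
    # each compute node mounts its local disk at /local and enables a
    # local swap partition, so writes never touch the shared root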

Sean Dilda posted that he prefers the local disk approach. He felt that using local disks made the cluster much more scalable. However, he did say that maintaining disk images was a pain. He uses a kickstart configuration rather than an image to make his life a bit easier.
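Sean did not share his configuration, but a stripped-down compute-node kickstart file looks roughly like the sketch below; the install server, partition sizes, and package list are invented for illustration:

    # minimal, hypothetical kickstart for a compute node
    install
    nfs --server=192.168.1.1 --dir=/export/install/dist
    lang en_US.UTF-8
    keyboard us
    network --bootproto dhcp
    clearpart --all --initlabel
    part /        --fstype ext3 --size 8192
    part swap     --size 2048
    part /scratch --fstype ext3 --size 1 --grow
    reboot
    %packages
    @base
    openssh-server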

Kimmo Kallio posted that he uses a solution where the nodes boot over the network and create a ramdisk to get things going. The next step in the boot process checks or creates the partitions and file systems, copies the root file system to the local drive, does a pivot_root, and abandons the ramdisk. After that, everything else is copied over as .tar.gz files and untarred. There are a few other steps he takes to build the nodes, but he thought this approach has the performance and server-independence benefits of local disk with the low-maintenance benefits of network booting.
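The list post did not include Kimmo's scripts, but the pivot from ramdisk to local disk is easy to sketch in shell. Everything below is hypothetical - the device names, the partition layout, and where the root tarball comes from - and it simply mirrors the steps he describes, run from inside the initial ramdisk:

    # check/create partitions and file systems (layout file is made up)
    sfdisk /dev/hda < /etc/partition.layout
    mke2fs -j /dev/hda1
    mkswap /dev/hda2 && swapon /dev/hda2

    # copy the root file system onto the local drive
    mkdir -p /newroot
    mount /dev/hda1 /newroot
    tar xzf /rootfs.tar.gz -C /newroot

    # switch to the on-disk root and abandon the ramdisk
    mkdir -p /newroot/initrd
    cd /newroot
    pivot_root . initrd
    exec chroot . /sbin/init </dev/console >/dev/console 2>&1
    # later in the boot, the remaining software arrives as .tar.gz
    # bundles that are fetched and untarred onto the local file system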

Topics such as this one are always very interesting because they serve to develop "best practice" information for people considering clusters.

This article was originally published in ClusterWorld Magazine. It has been updated and formatted for the web. If you want to read more about HPC clusters and Linux you may wish to visit Linux Magazine.

Jeff Layton has been a cluster enthusiast since 1997 and spends far too much time reading mailing lists. He can be found hanging around the Monkey Tree at ClusterMonkey.net (don't stick your arms through the bars though).

©2005-2023 Copyright Seagrove LLC, Some rights reserved. Except where otherwise noted, this site is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International. The Cluster Monkey Logo and Monkey Character are Trademarks of Seagrove LLC.