Filesystem question (sort of newbie)

Robert G. Brown rgb at
Fri Oct 3 11:24:48 EDT 2003

On Fri, 3 Oct 2003, Cannon, Andrew wrote:

> Each computer will be running Red Hat (either 8 or 9 I haven't decided yet,
> any advice is still appreciated), and I was wondering how to best organise
> the disks on each node. 
> I am thinking (only started wondering about this today) of installing the
> cluster software on the master node (pvm, MPI and the actual calculation
> software, MCNP) and mounting the disk on each of the other nodes, so that
> all they have on their hard drives is the minimal install of RH. The
> question I am asking is, will this work and what sort of performance hit
> will there be? Would I be better installing the software on each computer?
> TIA (sorry for being so stoopid, I'm still very much a learner at linux and
> clustering)

If the nodes have lots of memory, most of their access to non-data disk
(programs and libraries) will come out of caches after the systems have
been up for a while, so they won't take a HUGE performance hit, but
things like loading a big program for the first time may take longer.

However, if you work to master PXE and kickstart (which go together like
ham and eggs) and have adequate disk, in the long run your maintenance
will be minimized by putting energy into developing a node kickstart
script.  Then you just boot the nodes into kickstart over the network,
wait a few minutes for the install and boot into production.
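As a rough sketch of what such a script looks like (every path, IP, and
the package list below is made up for illustration, not taken from an
actual cluster), a minimal node kickstart file for RH 8/9 might be:

```
# ks.cfg -- hypothetical minimal compute-node kickstart
install
nfs --server=192.168.1.1 --dir=/export/redhat9   # install tree on the master
lang en_US.UTF-8
keyboard us
network --bootproto=dhcp
rootpw --iscrypted $1$examplehash                # replace with a real hash
authconfig --enableshadow --enablemd5
timezone America/New_York
bootloader --location=mbr
clearpart --all --initlabel
part / --fstype ext3 --size 4096
part swap --size 512
reboot

%packages
@ Base
# add whatever else the nodes need beyond the minimal set

%post
# e.g. have nodes mount the master's shared software tree at boot
echo "master:/usr/local /usr/local nfs ro 0 0" >> /etc/fstab
```

Once this file is served over NFS or HTTP from the master, every node
installs identically from it, which is the whole point.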

This will take you some time to learn (there are HOWTO-like resources
online, so it isn't a LOT of time) and if you've got nodes with NICs that
don't support PXE you'll likely want to replace them or add ones that
do, but once you invest these capital costs the payback is that your
marginal cost for installing additional nodes after the first node you
get to install "perfectly" is so close to zero as to make no nevermind.
Make a dhcp table entry.  Boot node into install.  Boot node.
Reinstalling is exactly the same process and can be done in minutes if a
hard disk crashes.
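The "dhcp table entry" above is just a per-host stanza in dhcpd.conf on
the master; a sketch (the MAC address, IPs, and filenames are examples,
not real values) looks like:

```
# dhcpd.conf fragment -- hypothetical entry for one PXE-booting node
host node01 {
    hardware ethernet 00:11:22:33:44:55;   # the node NIC's MAC address
    fixed-address 192.168.1.101;
    next-server 192.168.1.1;               # TFTP server holding the PXE loader
    filename "pxelinux.0";                 # its config in pxelinux.cfg/ points
                                           # at the install kernel + ks.cfg
}
```

One stanza per node, and "installing" node 47 is just copying this block
with a new MAC and IP and powering the box on.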

It gets to be so easy that we almost routinely do a reinstall after
working on a system for any reason, including ones where it probably
isn't necessary.  You can reinstall a system from anywhere on the
internet (if your hardware is accessible and preconfigured for this to
work).

Finally, if you include yum on the nodes, you can automagically update
the nodes from a master repository image on your server, and mirror your
server image from one of the Red Hat mirrors, and actually maintain a
stream of updates onto the nodes with no further action on your part.
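Concretely (URL and paths are illustrative; yum in the RH 8/9 era keeps
its repository definitions in /etc/yum.conf itself), the node side of
this is roughly:

```
# /etc/yum.conf fragment on each node -- point at the master's mirror
[base]
name=Red Hat Linux $releasever base
baseurl=http://master/mirror/redhat/$releasever/

# and a nightly unattended update, e.g. as /etc/cron.daily/yum.cron:
#   #!/bin/sh
#   /usr/bin/yum -y update > /dev/null 2>&1
```

The master rsyncs its mirror tree from a public Red Hat mirror, and the
nodes pull from the master on their own schedule.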

At this point, if you aren't doing Scyld or one of the preconfigured
cluster packages and want to roll your own cluster out of a base install
plus selected RPMs (and why not?), PXE+kickstart/RH+yum forms a pretty
solid low-energy paradigm for installation and maintenance once you've
learned how to make it work.


> Andy
> Andrew Cannon, Nuclear Technology (J2), NNC Ltd, Booths Hall, Knutsford,
> Cheshire, WA16 8QZ.
> Telephone; +44 (0) 1565 843768
> email: mailto:andrew.cannon at
> NNC website:
> _______________________________________________
> Beowulf mailing list, Beowulf at
> To change your subscription (digest mode or unsubscribe) visit

Robert G. Brown	             
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb at
