Network Booting

Robert G. Brown rgb at
Thu Jun 19 15:53:05 EDT 2003

On Thu, 19 Jun 2003, torsten wrote:

> >> 7. Does the  Linux Terminal Server  Project (LTSP) have  anything useful
> >> that I could/should be using?
> >
> >See answer to 6.  Conceivably, but it depends on what you're trying to
> >do.  So what are you trying to do?
> Thank you for the very complete answers.  They resolve a number of
> concerns I have run into.
> I am trying to set up a cluster to run MPICH.  Some of the PCs are
> headed workstations; the rest will be headless (for MPICH only).
> While we have manually (via CD-ROM) set up the workstations, I want to
> automate the addition and maintenance of the headless PCs.

An excellent idea.  There are a number of resources linked on the brahma
page that should help you out.  There are lots of ways to do it.  We use
DHCP+PXE+KICKSTART+YUM here (and there is moderate documentation of how
to go about it in one of the brahma references), but there is also FAI
(Fully Automatic Installation) for Debian, there is Scyld (if you want a
commercial solution that makes your cluster pretty much plug-n-play for
MPI jobs), there is clustermatic and bproc, and Mandrake and SuSE have
solutions of their own that I'm less familiar with.
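As a sketch of the DHCP side of such a setup (the subnet, addresses, MAC
address, and hostname below are all hypothetical examples, not anything
from our actual configuration), a dhcpd.conf entry for a PXE-booting
node might look something like:

```
# /etc/dhcpd.conf -- minimal sketch for PXE-booting cluster nodes
# (all addresses and names below are hypothetical examples)
subnet 192.168.1.0 netmask 255.255.255.0 {
    option routers 192.168.1.1;
    next-server 192.168.1.1;      # TFTP server holding pxelinux.0
    filename "pxelinux.0";        # PXE bootloader (from the syslinux package)

    host node01 {
        hardware ethernet 00:11:22:33:44:55;   # the node's NIC MAC address
        fixed-address 192.168.1.101;           # node gets a stable IP
    }
}
```

The fixed host entries are what let you add a headless node by recording
its MAC address once and never touching its console again.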

> I am trying to use DHCP with PXE booting to boot the PCs.  I would be
> grateful for any advice in this general direction.

Hopefully some of the many links I've thrown at you will be of use.  I
don't think it is useful for me to write a HOWTO directly to the whole
list, though... especially when there is so much excellent documentation
now online.
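For what it's worth, the PXE side mostly amounts to dropping pxelinux.0
and a config file onto the TFTP server.  A minimal pxelinux.cfg/default
that kicks off an automated (kickstart) install might look like the
following sketch -- the paths and the kickstart location are
hypothetical examples, not a recipe:

```
# /tftpboot/pxelinux.cfg/default -- minimal sketch
# (paths and the kickstart file location are hypothetical examples)
default ks
prompt 0

label ks
    kernel vmlinuz            # installer kernel from your distribution
    append initrd=initrd.img ks=nfs:192.168.1.1:/export/ks/node.cfg
```

The node then PXE-boots, pulls the installer over the network, and
installs itself unattended from the kickstart file.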


P.S. -- I can now say that Mosix is not a good idea.  I'd suggest
picking a package-based distribution that supports automated
installation AND an automated package maintenance tool and going with
it, OR picking Scyld, dickering out a reasonable license fee for your
small cluster, and going that way.  Your long-term interests are not
just automating
installation but automating REinstallation, running maintenance package
updates and upgrades, and LAN operation (accountspace, shared filespace
and so forth).  An RH-based cluster, for example, is one approach that
makes your cluster very much like a LAN of workstations, except that
some of those workstations have no heads and are used pretty much only
for MPI or PVM or EP/script-managed computations.  The Scyld cluster
makes your cluster
nodes into "a supercomputer" -- a virtual MP machine -- typically with a
single head, a custom task/file loader and clusterwide PID space.  If I
understand it correctly (I'm sure Don will correct me if I'm
misrepresenting it:-).  Your current model sounds more like a NOW
cluster with multiple servers and points of access and nodes you can
login to (you don't log into a "node" in a Scyld cluster any more than
you log into a "processor" in an MP machine:-) but I don't know if that
is by deliberate design or just what you knew how to do.
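The "running maintenance package updates" part, by the way, can be as
simple as a nightly cron job on every node once a package tool like yum
is in place (a sketch -- the time and file name are arbitrary, and the
exact yum invocation may vary with your version):

```
# /etc/cron.d/yum-nightly -- sketch of automated package maintenance
# runs yum non-interactively at 3:30 each morning (time is arbitrary)
30 3 * * * root /usr/bin/yum -y update
```

With all nodes pointed at the same package repository, this keeps the
whole cluster's software in lockstep without anyone logging into a node.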

Robert G. Brown	             
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb at

Beowulf mailing list, Beowulf at