[Beowulf] motherboards for diskless nodes
rsweet at aoes.com
Fri Feb 25 12:52:59 EST 2005
On Fri, 25 Feb 2005, Craig Tierney wrote:
>> why are you going diskless?
>> IDE hard drives cost very little, and you can still do your network install.
>> Pick your favourite toolkit, Rocks, Oscar, Warewulf and away you go.
> IDE drives fail, they use power, you waste time cloning, and
> depending on the toolkit you use you will run into problems
> with image consistency.
> I have run large systems of both kinds. The last system was
> diskless and I don't see myself going back. I like changing
> one file in one place and having the changes show up immediately.
> I like installing a package once, and having it show up immediately,
> so I don't have to reclone or take the node offline to update
> the image.
I think the term "diskless" is sometimes the problem when discussing centrally
installed and managed systems. Lots of "diskless" clusters have gigabytes and
gigabytes of local disk, only it is used for swap and temporary I/O, not for the OS.
In 2000 I switched from locally installed system images (using the very good -
even back then - system-imager) to using either nfsroot or warewulf style
diskless systems, but have retained the local disk for scratch I/O. While I
can understand debating over the merits of nfsroot vs RAM-disk root, I fail to
see many useful arguments for maintaining a local OS install. However, that
doesn't mean that local disks are bad. It all depends upon the application,
of course, but in many cases it's hard to beat the local disk for temporary
I/O, especially if you don't have gobs and gobs of RAM to spare. Also PVFS is
sufficiently mature that you can easily combine all of the (very cheap) local
disks into a large parallel filesystem. Using nfsroot you can switch from one
"system image" (really just an nfsroot file tree) to another one with a simple
reboot. You have all of the advantages of central configuration and control
combined with the convenience and speed of local I/O and local swap.
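To make the idea concrete, here is a minimal sketch of what one such nfsroot boot entry might look like in a pxelinux config; the kernel filename, server address, and export path below are made up for illustration, not taken from any real system:

```
# /tftpboot/pxelinux.cfg/default  (illustrative only)
DEFAULT nfsroot-image
LABEL nfsroot-image
    KERNEL vmlinuz-2.4.27
    APPEND root=/dev/nfs nfsroot=10.0.0.1:/exports/roots/compute ip=dhcp
```

Switching "system images" then amounts to pointing the APPEND line at a different exported tree and rebooting; nothing on the node's own disk changes.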
It can be _very_ useful in a situation where you have to support multiple user
communities with weird apps or strange requirements. Using pxeboot and
pxelinux, I've set up systems where the queue system could even request that a
node use a specific system configuration before starting the job (e.g., must
have linux 2.4 with checkpointing in the kernel). Nodes might be available,
but running another nfsroot cluster system image (say they are running RHEL,
with no checkpointing, or for compatibility with some other commercial app
they are running RH 7.2). The queue system tells the cluster master to
reconfigure pxelinux so that the requested nodes default to the required
config, by pointing them at another nfsroot tree. The cluster master tells
the nodes to reboot, and when they are rebooted and running the appropriate
image, the job runs. That sort of config requires a lot of glue, but it would
be way too much headache to even attempt without "diskless" systems.
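The "glue" on the cluster master can be fairly small. As a hedged sketch (the function names, tftp layout, and profile names here are my own illustration, not anything from the original setup): pxelinux looks up per-node config files named after the node's MAC address, so repointing a node at another image is just a symlink swap before the reboot.

```python
import os

def pxelinux_mac_filename(mac: str) -> str:
    """PXELINUX searches for a per-node config file named after the NIC's
    MAC address: ARP hardware type '01' plus the six octets, lowercase
    and dash-separated, e.g. 01-00-50-56-ab-cd-ef."""
    octets = mac.lower().replace(":", "-").split("-")
    if len(octets) != 6:
        raise ValueError(f"not a MAC address: {mac}")
    return "01-" + "-".join(octets)

def point_node_at_image(tftp_root: str, mac: str, profile: str) -> None:
    """Repoint a node's pxelinux config at a different profile (a file in
    pxelinux.cfg/ whose APPEND line names the desired nfsroot tree), so
    the next PXE boot comes up running that image.  Hypothetical helper,
    for illustration only."""
    cfg_dir = os.path.join(tftp_root, "pxelinux.cfg")
    link = os.path.join(cfg_dir, pxelinux_mac_filename(mac))
    tmp = link + ".tmp"
    os.symlink(profile, tmp)   # stage the new link first...
    os.replace(tmp, link)      # ...then swap it in atomically

# After this, the master would tell the node to reboot (ssh, IPMI, etc.)
# and release the job once the node reports the right image.
```

The queue-system integration is just a prologue that calls something like the above for each allocated node and waits for them to come back up.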
Ryan Sweet <ryan.sweet at aoes.com>
Advanced Operations and Engineering Services
AOES Group BV http://www.aoes.com
Phone +31(0)71 5795521 Fax +31(0)71572 1277
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf