[Beowulf] best Linux distribution
deadline at eadline.org
Tue Oct 9 08:32:19 EDT 2007
Excellent point. I have often thought that "diskless" provisioning
opens up lots of opportunities to create custom node groups
based on kernels or distributions. Throw in a virtualized
head node, and many ISV requirements could be handled this way;
e.g., a virtualized SUSE environment running on top of Red Hat
could request 32 SUSE nodes from the scheduler (running under a
Red Hat instance). The scheduler just provisions nodes as needed
and sets them in a low-power state when not in use.
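To make the idea concrete, such a request might look like the
following under a Torque/PBS-style scheduler. This is only a sketch;
the node property name "suse" and the script details are my
assumptions, not an actual site configuration:

```shell
#!/bin/sh
# Hypothetical PBS/Torque job script: ask the scheduler for 32 nodes
# carrying the (assumed) "suse" property, i.e. nodes the provisioning
# system will boot into the SUSE image before the job starts.
#PBS -N qm_run
#PBS -l nodes=32:suse
#PBS -l walltime=12:00:00

cd $PBS_O_WORKDIR
mpirun -np 32 ./qm_code input.dat
```

The scheduler side would map the "suse" property to a provisioning
action (reboot the node into the matching PXE/NFS-root image) before
releasing the nodes to the job.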
Going with fully virtualized nodes is another option, provided
the applications can still run close enough to the hardware.
Note that diskless provisioning does not imply diskless nodes;
if you need local drives, you can still use them in a
diskless-boot scheme. Not nailing an OS to the hard drive
on cluster nodes has lots of advantages.
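For readers unfamiliar with the mechanics, a diskless NFS-root boot
of this kind might be wired up along these lines. This is a sketch
only; the paths, server address, kernel names, and labels are
illustrative assumptions:

```shell
# /tftpboot/pxelinux.cfg/default -- hypothetical PXELINUX config.
# Each LABEL pairs a kernel with a distribution image exported over
# NFS; the admin (or the scheduler's provisioning hook) selects which
# label a given node boots, so no OS is nailed to the local disk.
DEFAULT centos5

LABEL centos5
    KERNEL vmlinuz-2.6.18-el5
    APPEND root=/dev/nfs nfsroot=192.168.1.1:/exports/centos5 ip=dhcp

LABEL suse
    KERNEL vmlinuz-suse
    APPEND root=/dev/nfs nfsroot=192.168.1.1:/exports/suse ip=dhcp
```

Local drives can still be mounted for scratch space from within
either image; the point is only that the OS itself comes over the
network at boot time.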
> On Mon, 8 Oct 2007, Robert G. Brown wrote:
>> RHEL/Centos are good where vendors require "binary compatibility" on
>> closed source software, as the standard of said binary
> What strikes me in this whole discussion is the idea of 'one
> distribution fits all' when applied to all nodes of a cluster and all
> applications that run on that cluster. In these days of PXE booting,
> with several solutions readily available for either building a node
> from scratch (like kickstart) or booting a prebuilt setup with
> NFS-root or a ramdisk, what's so difficult about matching a node, an
> application, and a distribution/custom setup on request?
> Real case: a quantum mechanics code that we bought some years ago
> was provided only as statically linked binaries. They worked fine
> on the distros current at that time, and we successfully used them
> on CentOS-3 (2.4 kernel). However, we discovered the hard way on the
> new CentOS-5 (2.6 kernel) that the statically linked binaries didn't
> work anymore, as the kernel interfaces had changed - but, after a few
> lines were changed in the config files and the nodes rebooted, the
> binaries were again happily running in their required configuration.
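The message does not say which lines were changed, so the following
is only a guess at the kind of fix involved: one common culprit when
2.4-era static binaries meet a 2.6 kernel is the vsyscall/VDSO
mechanism, which on such systems can be disabled with a boot
parameter (the exact knob varies by kernel version and architecture):

```shell
# Hypothetical /boot/grub/grub.conf fragment -- the vdso=0 parameter
# on i386 2.6 kernels disables the VDSO so that old statically linked
# binaries fall back to the traditional int 0x80 syscall path.
kernel /vmlinuz-2.6.18-8.el5 ro root=LABEL=/ vdso=0
```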
> Of course, the admin is responsible for defining which
> distributions/custom setups can run on a certain node, based on the
> hardware of that node and the kernel of the distribution/custom
> setup. But after this is done, the user can limit his/her jobs to
> running on these nodes or ask the queueing system to set up a node
> according to the requirements of the job (I think the term is
> 'provisioning').
> Sure, it helps in this case to run a distribution with long-term
> support (like RHEL/CentOS/SL, SLES or Ubuntu LTS) so that you don't
> have to waste too much of your own time with updates, especially
> security-related ones.
>> Far short of Debian, but plenty big enough to include just about all
>> mainstream useful packages for any cluster or LAN.
> I'm making sure that any cluster-related package that is part of the
> default distribution is not part of what the nodes get to run. Why?
> Because very often the lowest-common-denominator options used for
> building the package (which is a good idea for a widely used
> distribution) don't fit _my_ setup. So I take the fact that the
> distribution offers me all the needed tools as a fallback, but I'm
> always trying to match all the components as well as possible. And
> if you search the archives of the LAM/MPI mailing lists you'll see
> the larger picture...
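The kind of mismatch being described might look like the following.
The package version, flags, and paths are illustrative assumptions on
my part, not the poster's actual build:

```shell
# The distribution's stock MPI package is built with generic options:
rpmbuild --rebuild openmpi-1.2.4-1.src.rpm

# ...whereas a from-source build can be matched to the local setup,
# e.g. enabling the site's InfiniBand interconnect (hypothetical
# example; Open MPI 1.2 accepted --with-openib for this):
tar xzf openmpi-1.2.4.tar.gz && cd openmpi-1.2.4
./configure --prefix=/opt/openmpi --with-openib
make -j4 && make install
```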
> Bogdan Costescu
> IWR - Interdisziplinaeres Zentrum fuer Wissenschaftliches Rechnen
> Universitaet Heidelberg, INF 368, D-69120 Heidelberg, GERMANY
> Telephone: +49 6221 54 8869, Telefax: +49 6221 54 8868
> E-mail: Bogdan.Costescu at IWR.Uni-Heidelberg.De
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf