[Beowulf] best architecture / tradeoffs

Greg M. Kurtzer gmkurtzer at lbl.gov
Mon Aug 29 00:17:35 EDT 2005


On Sun, Aug 28, 2005 at 10:55:20AM -0400, Robert G. Brown wrote:
> Now one thing I'm still working on figuring out is just what warewulf
> will do when confronted by heterogeneous node hardware/infrastructure
> etc.  One doesn't really want e.g. kudzu to redetect hardware on each
> reboot, for example.  Not really a warewulf issue per se, just one of
> the many things that has to be resolved setting up a default node
> configuration in an actual cluster with particular components.  I
> suspect that at that point automagic fails and one has to start to
> customize... although doubtless Tim will let us know if this is
> incorrect (I'm still learning warewulf by playing with it).  I also have
> yet to see if a single arch server (e.g. i386) can comfortably serve a
> different arch (e.g. x86_64) since I have both in my home/test/play
> cluster.  I also have some "grumbles" about its marginally inadequate
> and incomplete documentation and the lack of a yum repo tree or the
> placement of its core packages in an existing extras tree such as livna,
> but if I ever DO figure it all out and end UP fully embracing it this
> may be something I end up contributing back to the project.:-)

I can shed some light here... 

Warewulf boots an initial ram disk (wwinitrd), and uses this initial ram
disk to bootstrap the virtual node file system (VNFS) image migration to
the node. All hardware detection is done from within the wwinitrd
including setup of the network and other kernel related configurations.
I chose to have the wwinitrd do the complete hardware integration so
that Warewulf can more easily remain distribution agnostic.
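To make the flow above concrete, here is a minimal toy sketch of the
initrd-side image migration: a root tree is packed on the master and
unpacked into what will become the node's RAM-disk root. All paths and
the image name are hypothetical stand-ins, not Warewulf's actual layout,
and a local copy stands in for the network transfer.

```shell
set -e
work=$(mktemp -d)

# On the master: a (toy) VNFS image is just a tarball of a root tree.
mkdir -p "$work/vnfs-src/etc"
echo "node0000" > "$work/vnfs-src/etc/hostname"
tar -C "$work/vnfs-src" -czf "$work/vnfs.img.tar.gz" .

# On the node (inside the initrd): fetch the image and unpack it into
# the RAM disk that will become the node's root filesystem.
mkdir -p "$work/newroot"
tar -C "$work/newroot" -xzf "$work/vnfs.img.tar.gz"

# A real initrd would now configure the network and switch_root;
# here we just confirm the migrated tree is in place.
cat "$work/newroot/etc/hostname"
```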

The wwinitrd utilizes 'detect' to do hardware detection. It is *VERY*
lightweight, fast, accurate, configurable, and is also used as the primary
hardware detection for cAos Linux. Yes, it is small enough to fit in
the initial ram disk. Its major drawback is that it is >= 2.6 specific,
but warewulf has another way to load the appropriate modules on < 2.6
based systems.
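The core of this kind of detection is mapping PCI vendor:device IDs to
kernel modules. The snippet below is a toy illustration of that idea,
not detect's real database or code; the two IDs shown are real Intel and
NVIDIA PCI IDs, but the table is otherwise made up.

```shell
# Resolve a PCI vendor:device ID to a driver module (toy table).
lookup_module() {
    case "$1" in
        8086:100e) echo e1000 ;;   # Intel 82540EM gigabit NIC
        10de:0141) echo nv ;;      # NVIDIA GeForce 6600
        *)         echo unknown ;;
    esac
}

# In an initrd one would walk /sys/bus/pci/devices and modprobe each
# match; here we just resolve a single ID.
mod=$(lookup_module 8086:100e)
echo "$mod"
```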

Supporting ia32, ia64, x86_64 and even power from one Warewulf master is
definitely doable. I know of one person in particular who did it, but he
mentioned that the tricky part was maintaining the VNFS file systems.
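One plausible way to picture that maintenance burden is one VNFS image
per architecture on the master, each of which must be built and updated
separately. The layout below is hypothetical, purely to illustrate why
multi-arch upkeep multiplies the work.

```shell
set -e
top=$(mktemp -d)

# One VNFS root tree per architecture (names from the post above).
for arch in ia32 ia64 x86_64 power; do
    mkdir -p "$top/vnfs/$arch/etc"
done

# Every package update or config change must be applied once per tree --
# that per-arch repetition is the "tricky part".
ls "$top/vnfs"
```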

Yes, we are aware of the lack of docs, which is why I put the entire
site into a Wiki. It is getting better... Honest. ;)

> Anyway, that's why I >>like<< warewulf as a philosophical approach (at
> least) over some of the other choices.  It divorces the support of the
> minimal "cluster" core from the choice of OS, from its natural
> update/upgrade process, and so on, and maximally leverages the particular
> tools (e.g.  yum) that make managing/selecting packages easy.  The
> clusters you end up with are close to what you'd get if you rolled your
> own on top of your own distro (diskless, yet:-) but a whole lot easier
> than roll-your-own-from-scratch.  Agnostic is good.  Automagic agnostic
> is better (though harder -- requires a broad developer/participant
> base).  Tools written/maintained by folks that eat their own dog food is
> best.  warewulf looks like it's on the generally correct track.

Lol, yes we eat our own "wulf" food. We now have 18 (going on 20 in
the next several months) different clusters that we maintain with
Warewulf.

As a matter of fact, we just built a very nice 256-CPU InfiniBand system
running at approximately 83.4% efficiency with HPL. There was a very
noticeable speedup after moving from the default installed cluster
distro to Warewulf (same kernel, MPI, compilers, drivers, etc.). Install
time was 2.5 hours (including 1 hour of hardware debugging) to the point
of running the acceptance tests.
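For readers unfamiliar with the metric: HPL efficiency is the measured
Rmax divided by the theoretical peak Rpeak. The post does not give
either number, so the figures below are made up solely to show the
arithmetic behind a value like 83.4%.

```shell
# Hypothetical numbers, GFLOPS -- not from the actual cluster.
rpeak=1000   # theoretical peak of the machine
rmax=834     # best measured HPL result

# efficiency (%) = 100 * Rmax / Rpeak
eff=$(awk -v p="$rpeak" -v m="$rmax" 'BEGIN { printf "%.1f", 100 * m / p }')
echo "${eff}%"
```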

Once we pass final acceptance we will post a press release.

;-)
-- 
Greg Kurtzer
Berkeley Lab, Linux guy
_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf


