[Beowulf] beowulf and X

Robert G. Brown rgb at phy.duke.edu
Wed Dec 10 07:25:50 EST 2003

On Tue, 9 Dec 2003, mark kandianis wrote:

> quite honestly if mosix can do it, it seems that xfree86 is already there,
> so it looks like my question is moot. so i think i can get this up quicker 
> than
> i thought.
> are there any particular kernels that are geared to beowulf?  or is this 
> something
> that one has to roll their own?

Hmmm, it looks like you really need a general introduction to the
subject.  Mosix may or may not be the most desirable way to proceed, as
it is quite "expensive" in terms of overhead and requires a custom
(patched) kernel.  It is also not exactly a GPL product, although it is
free and open source.  If you like, its "fork and forget" design
requires all I/O channels of any sort to be transparently encapsulated
and forwarded over TCP sockets to the master host where the jobs are
begun.  For something with little, rare I/O this is fine -- Mosix then
becomes a sort of distributed interface to a standard Linux scheduler
with a moderate degree of load balancing over the network.  For
something that opens lots of files or pipes and does a lot of writing to
them, it can clog up your network and kernel somewhat faster than an
actual parallel program where you can control e.g. data collection
patterns and avoid collisions and reduce the overhead of encapsulation.
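Back of the envelope, the difference looks something like this.  The
following is a toy model (not Mosix code!), with latency and bandwidth
numbers that are purely illustrative assumptions, comparing the cost of
forwarding many small writes over TCP to the home node against batching
the same data into one transfer:

```python
# Toy model: why lots of small forwarded writes hurt.
# LATENCY and BANDWIDTH below are assumed, illustrative figures.

LATENCY = 100e-6      # assumed per-message network latency (seconds)
BANDWIDTH = 12.5e6    # assumed ~100 Mbit/s link, in bytes/second

def forwarded_time(n_writes: int, bytes_per_write: int) -> float:
    """Every forwarded write pays a latency hit plus its transfer time."""
    return n_writes * (LATENCY + bytes_per_write / BANDWIDTH)

def batched_time(n_writes: int, bytes_per_write: int) -> float:
    """One big transfer: a single latency hit plus the same total bytes."""
    return LATENCY + (n_writes * bytes_per_write) / BANDWIDTH

# 100,000 writes of 64 bytes each:
print(f"forwarded: {forwarded_time(100_000, 64):.2f} s")   # latency-dominated
print(f"batched:   {batched_time(100_000, 64):.3f} s")     # bandwidth-dominated
```

The forwarded case is latency-dominated and ends up an order of magnitude
slower for the same payload, which is roughly why a real parallel program
that controls its own data collection patterns can beat transparent
encapsulation.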

If you're talking only a "small" cluster -- < 64 nodes, maybe < 32 nodes
(it depends on the I/O load of your application) -- you have a decent
chance of not getting into trouble with scaling, but you should
definitely experiment.  If you're wanting to run on hundreds of nodes,
I'd be concerned that you'll only be able to use ten, or thirty, or
forty-seven, before your application's scaling craps out -- all the other
nodes are then potentially "wasted".
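To see why scaling craps out, here is a quick Amdahl's law sketch (the
5% serial fraction is just an assumed, illustrative number -- your
application's actual serial fraction determines where the wall is):

```python
# Amdahl's law: speedup on N nodes when a fraction `serial` of the
# work cannot be parallelized.  The 0.05 figure is illustrative.

def speedup(n_nodes: int, serial: float) -> float:
    """Classic Amdahl speedup: 1 / (serial + (1 - serial)/N)."""
    return 1.0 / (serial + (1.0 - serial) / n_nodes)

# Even 5% serial work caps you well below the node count:
for n in (10, 32, 64, 256):
    print(f"{n:4d} nodes -> speedup {speedup(n, 0.05):5.1f}")
# the speedup can never exceed 1/serial = 20, however many nodes you add
```

So with that assumed workload, node 257 buys you essentially nothing --
those nodes are "wasted" in exactly the sense above.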

There are quite a few resources for cluster beginners out there, many of
them linked to:


(so I won't bother detailing URLs to them all here).  Links and
resources on this site include papers and talks, an online book
(perennially unfinished, but still mostly complete and even
sorta-current:-) on cluster engineering, links to the FAQ, HOWTO, the
Beowulf Underground, turnkey vendor/cluster consultants, useful
hardware, networking stuff -- I've tried to make it a resource
clearinghouse although even so it is far from complete and gets out of
date if I blink.

Finally, I'd urge you to subscribe to the new Cluster Magazine (plug
plug, hint hint) which has articles that will undoubtedly help you out
with all sorts of things over the next twelve months.  I just got my
first issue, and its articles are being written by really smart people
on this list (and a few bozos -- sorry, OLD joke:-) and should be very,
very helpful to people trying to engineer their first cluster or their
fifteenth.  Besides, you get three free trial issues if you sign up now
and live in the US.

Best of luck, and to get even MORE help, describe your actual problem in
more detail.  Possibly after reading about parallel scaling and Amdahl's
law.

Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email: rgb at phy.duke.edu

Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
