The following is the old Beowulf FAQ. We will be updating it very soon.

Beowulf mailing list FAQ, version 2

- Notes
This FAQ is intended to forestall the repetitive questions on the
Beowulf mailing list.  Corrections welcomed.  All wrongs reversed.
Dates of the form [1999-05-13] indicate the date an entry was last
edited, not the date when what it describes was last updated.
[1999-05-13]

- Table of contents:
Notes
Table of contents
Acknowledgements
Short answers (this section takes five minutes to read; please read it
		before posting!)
	1. What's a Beowulf?
	2. Where can I get the Beowulf software?
	3. Can I take my software and run it on a Beowulf and have it go faster?
	4. PVM? MPI? Huh?
	5. Is there a compiler that will automatically parallelize my code for
		a Beowulf, like SGI's compilers?
	6. Why do people use Beowulfs?
	7. Does anyone have a database that will run faster on a Beowulf than
		on a single-node machine?
	8. Do people use keyboard-video-mouse switches?
	9. Who should I listen to and who's a bozo?
	10. Does anyone have a Linux compiler that recognizes bits of code that
		could be optimized with KNI, 3DNow!, and MMX instructions?
	11. Should I build a cluster of these 100 386s?
	12. Do I need to run Red Hat?
	13. I'm using the Extreme Linux CD . . .
	14. Does Beowulf need glibc?
	15. What compilers are there?
	16. What's the most important: CPU speed, memory speed, memory size,
		. . . what CPU should I use? . . .
	17. Can I make a Beowulf out of different kinds of machines --
	18. Where to go for more information?
	19. Is there a step-by-step guide to building a Beowulf?  Is there a HOWTO?

Long answers
Supplementary information and resources


- Acknowledgements

Robert G. Brown, Greg Lindahl, Forrest Hoffman, and Putchong
Uthayopas contributed valuable information to this FAQ.

Kragen Sitaker sort of edits it and wrote some of
the answers.  It's his fault it's so disorganized and out of date.

- Short answers (please read this section before posting!)

If you want longer answers, see the long answers.

1. What's a Beowulf? [1999-05-13]

It's a kind of high-performance massively parallel computer built
primarily out of commodity hardware components, running a free-software
operating system like Linux or FreeBSD, interconnected by a private
high-speed network.  It consists of a cluster of PCs or workstations
dedicated to running high-performance computing tasks.  The nodes in
the cluster don't sit on people's desks; they are dedicated to running
cluster jobs.  It is usually connected to the outside world through
only a single node.

Some Linux clusters are built for reliability instead of speed.  These
are not Beowulfs.

2. Where can I get the Beowulf software? [1999-05-13]

There isn't a software package called "Beowulf".  There are, however,
several pieces of software many people have found useful for building
Beowulfs.  None of them are essential.  They include MPICH, LAM, PVM,
the Linux kernel, the channel-bonding patch to the Linux kernel (which
lets you 'bond' multiple Ethernet interfaces into a faster 'virtual'
Ethernet interface), the global pid space patch for the Linux kernel
(which, as I understand it, lets you see all the processes on your
Beowulf with ps, and perhaps kill them), and DIPC (which lets you use
SysV shared memory, semaphores, and message queues transparently
across a cluster).  [Additions?  URLs?]

3. Can I take my software and run it on a Beowulf and have it go faster?
[1999-05-13]

Maybe, if you put some work into it.  You need to split it into
parallel tasks that communicate using MPI or PVM or network sockets or
SysV IPC.  Then you need to recompile it.

Or, as Greg Lindahl points out, if you just want to run the same
program a few thousand times with different input files, a shell script
will suffice.
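Greg Lindahl's point can be sketched in a few lines of Bourne shell.
Everything below is a placeholder sketch: the tr invocation stands in
for your real serial program, and on an actual Beowulf you would wrap
each run in rsh or ssh to a node name from your cluster's host list.

```shell
#!/bin/sh
# Sketch: run the same program once per input file, in parallel.
# 'tr' is a placeholder for the real serial application; on a real
# cluster the job line would be wrapped in rsh/ssh to a compute node.
set -e
mkdir -p inputs results
for i in 1 2 3; do echo "data $i" > inputs/run$i.in; done   # fake inputs

for f in inputs/*.in; do
    out="results/$(basename "$f" .in).out"
    tr 'a-z' 'A-Z' < "$f" > "$out" &    # one background job per input
done
wait                                     # block until all runs finish
ls results
```

With N nodes, replacing the placeholder command with something like
"rsh nodeK myprog" (myprog being your hypothetical application) and
cycling K through the node list gives the "few thousand runs" pattern
with no MPI or PVM at all.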

As Christopher Bohn points out, even multi-threaded software won't
automatically get a speedup; multi-threaded software assumes shared
memory.  There are some distributed shared memory packages under
development (DIPC, Mosix, ...), but the memory access patterns in
software written for an SMP machine could potentially result in a
*loss* of performance on a DSM machine.

4. PVM? MPI? Huh? [1999-05-13]

PVM and MPI are software systems that allow you to write
message-passing parallel programs, in Fortran and C, that run on a
cluster.  PVM was the de facto standard until MPI appeared, but PVM is
still widely used and works well.  MPI (Message Passing Interface) is
the de facto standard for portable message-passing parallel programs;
it is standardized by the MPI Forum and available on all massively
parallel supercomputers.

More information can be found in the PVM and MPI FAQs.

5. Is there a compiler that will automatically parallelize my code for a
Beowulf, like SGI's compilers? [1999-05-13]

No.  There is a tool called BERT, from plogic.com, which will help
you manually parallelize your Fortran code.  NAG's and Portland
Group's Fortran compilers can also build parallel versions of your
Fortran code, given some hints from you (in the form of HPF and OpenMP
(?) directives).  These parallel versions may not run any faster than
the non-parallel versions.

6. Why do people use Beowulfs? [1999-05-13]

Either because they think they're cool or because they get
supercomputer performance on some problems for a third to a tenth the
price of a traditional supercomputer.

7. Does anyone have a database that will run faster on a Beowulf than
on a single-node machine? [1999-05-13]

No.  Oracle and Informix have databases that might do this someday, but
they don't yet do it on Linux.

8. Do people use keyboard-video-mouse switches? [1999-05-13]

Most people don't because they don't need them. Since they're running
Linux, they can just telnet to any machine anyway unless it's broken.
Lots of Beowulfs don't even have video cards in every node.  Console
access is generally only needed when the box is so broken it won't
boot.

Some people use serial consoles instead of KVM switches, even for this.
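As a sketch of why KVM switches are rarely needed: day-to-day
administration is a loop over a remote shell, not a console.  The node
names below are hypothetical, and echo stands in for rsh/ssh so the
sketch runs anywhere:

```shell
#!/bin/sh
# Dry-run sketch of "just log in to any node": loop over the node list
# and show the command that rsh/ssh would run on each one.
NODES="node1 node2 node3 node4"    # hypothetical node names
CMD="uptime"

for n in $NODES; do
    # On a real cluster: rsh "$n" $CMD   (or: ssh "$n" $CMD)
    echo "$n: would run: $CMD"
done | tee admin.log
```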

9. Who should I listen to and who's a bozo? [1999-05-13]

I don't know who's a bozo.  Maybe me.  Don Becker, Walter B. Ligon,
Putchong Uthayopas, Christopher Bohn, Greg Lindahl, Doug Eadline,
Eugene Leitl, Gerry Creager, and William Rankin are generally
thoughtful and well-informed, as well as frequently willing to help.
Probably other people in this category too.

Robert G. Brown claims to be a bozo, but I don't believe him, even
though he showed me his clown face.  Rob Nelson also claims to be a
bozo, but I think he is mistaken.

10. Does anyone have a Linux compiler that recognizes bits of code that
could be optimized with KNI, 3DNow!, and MMX instructions? [1999-05-13]

No.  Well, PentiumGCC has some support for this.

11. Should I build a cluster of these 100 386s? [1999-05-13]

If it's OK with you that it'll be slower than a single Celeron-333
machine, sure.  Great way to learn.

12. Do I need to run Red Hat? [1999-05-13]

No.  Indeed, the original Beowulf ran Slackware.

13. I'm using the Extreme Linux CD . . . [1999-05-13]

Don't -- it's way out of date.

14. Does Beowulf need glibc? [1999-05-13]

No.  But if you want to run a particular application on a libc5-based
Beowulf, make sure it compiles and works with libc5; likewise, if you
want to run a particular application on a glibc-based Beowulf, make
sure it compiles and works with glibc.

It is not recommended to configure different nodes differently in
software; that's a headache.

15. What compilers are there? [1999-05-13]

The gcc family, Portland Group, KAI, Fujitsu, Absoft, PentiumGCC, and
NAG.  Compaq is about to release beta AlphaLinux compilers which are
reputedly excellent, and some people already compile their
applications under Digital Unix and run them on AlphaLinux.

16. What's the most important: CPU speed, memory speed, memory size,
cache size, disk speed, disk size, or network bandwidth?  Should I use
dual-CPU machines?  Should I use Alphas, PowerPCs, ARMs, or x86s?
Should I use Xeons?  Should I use Fast Ethernet, Gigabit Ethernet, 
Myrinet, SCI, FDDI?  Should I use Ethernet switches or hubs?
[1999-05-13]

IT ALL DEPENDS ON YOUR APPLICATION!!!

Benchmark, profile, find the bottleneck, fix it, repeat.
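The benchmark-profile-fix loop can be as plain as timing a run before
and after each change.  The workload below is a placeholder arithmetic
loop; substitute your real application binary and its input.

```shell
#!/bin/sh
# Sketch: measure the wall-clock time of one run of the workload.
# The workload is a stand-in (it sums the integers 0..9999); the point
# is the measure-change-remeasure loop, not the arithmetic.
workload () {
    i=0; s=0
    while [ $i -lt 10000 ]; do s=$((s + i)); i=$((i + 1)); done
    echo "$s"
}

start=$(date +%s)
workload > result.txt
end=$(date +%s)
echo "elapsed: $((end - start))s"
cat result.txt
```

For anything finer-grained than whole seconds, time(1) or a profiler
such as gprof will tell you where the cycles actually go.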

Some people have reported that dual-CPU machines scale better than
single-CPU machines because your computation can run uninterrupted on
one CPU while the other CPU handles all the network interrupts.

17. Can I make a Beowulf out of different kinds of machines --
single-processor, dual-processor, 200MHz, 400MHz, etc.?
[1999-05-13]

Sure.  Splitting up your application optimally gets a little harder,
but it's not infeasible.
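One way the splitting "gets a little harder" on mixed hardware: a
static split should weight each node by its relative speed.  A minimal
sketch (node names and weights are hypothetical; a 400MHz node gets
weight 2, a 200MHz node weight 1):

```shell
#!/bin/sh
# Sketch: divide TOTAL work items among nodes in proportion to their
# assumed relative speeds, so the fast nodes finish at about the same
# time as the slow ones.
TOTAL=1200
NODES="node1:1 node2:1 node3:2 node4:2"   # hypothetical name:weight pairs

SUM=0
for n in $NODES; do SUM=$((SUM + ${n#*:})); done   # total weight

for n in $NODES; do
    w=${n#*:}
    echo "${n%:*} gets $((TOTAL * w / SUM)) items"
done > plan.txt
cat plan.txt
```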

18. Where to go for more information? [1999-05-13]

The "Long answers" section of this FAQ
http://beowulf.org/
http://beowulf-underground.org/
http://beowulf.gsfc.nasa.gov/  (currently the same as http://beowulf.org/)
http://www.extremelinux.org/
http://www.xtreme-machines.com/x-links.html  
The Beowulf mailing list itself
The "Supplementary information and resources" section of this FAQ

19. Is there a step-by-step guide to building a Beowulf?  Is there a HOWTO?
[1999-05-13]

Look at http://www.xtreme-machines.com/x-cluster-qs.html; this document
will get you going.  See also the docs in the "Docs" section of the
"Supplementary information and resources" section of this FAQ.

- Long answers 

Is there a compiler that will automatically parallelize my code for
a Beowulf, like SGI's compilers? [1999-05-13]

Robert G. Brown writes:

	With a few exceptions where a tool like BERT can tell you where
	and how to parallelize or an obvious routine is called with a
	plug-in parallel version, it is highly nontrivial to
	parallelize code.  This is simply because your program isn't
	usually aware of dependencies and time orderings, and it is
	VERY difficult to make a truly reliable tool to unravel
	everything.  With a pointer-based language like C it is all but
	impossible.

	A second problem (aside from determining what in your code can
	safely be parallelized) is determining what can SANELY be
	parallelized.  Code that will run efficiently on one parallel
	architecture may run slower than single-threaded code on
	another.

	A third problem is to determine the ARRANGEMENT of your code
	that runs most efficiently on whatever architecture you have
	available (beowulf, cluster, or otherwise).  Sometimes code
	that on the surface of things runs inefficiently can be
	rearranged to run efficiently.  However, this rearrangement is
	not usually obvious or intuitive to somebody who writes serial
	von Neumann code and is usually nothing at all like the
	original serial code one wishes to parallelize.

	The proper answer to your question is therefore: "No" it is not
	essential to use PVM or MPI -- one can use raw sockets on the
	"do it all yourself" end or NFS on the "all I know how to do or
	care to learn is open and write to a file" end with perhaps
	some ground in between.  However, the answer is ALSO "No" it is
	almost certainly not enough to just recompile even with the
	smartest of compilers.  The problem is too complex to fully
	automate, and the underlying serial code being parallelized may
	need complete rearrangement and not just a plug-in routine.

See http://noel.feld.cvut.cz/magi/soft.html for more.
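The "open and write to a file" end of the spectrum Brown mentions can
be sketched with a shared spool directory.  Everything here is
hypothetical and runs locally; on a real cluster spool/ would be an
NFS export visible to all nodes, and each node would run its own copy
of the worker loop.

```shell
#!/bin/sh
# Sketch: file-based work distribution over a shared directory.
# A master drops task files into spool/; workers claim tasks by
# renaming them (mv is atomic within one filesystem, so no task is
# processed twice).  'tr' is a placeholder for the real computation.
set -e
mkdir -p spool done

for t in 1 2 3; do echo "task $t" > spool/task$t; done    # master side

for f in spool/task*; do                                  # one worker
    if mv "$f" "$f.claimed" 2>/dev/null; then
        tr 'a-z' 'A-Z' < "$f.claimed" > "done/$(basename "$f").out"
    fi
done
ls done
```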

- Supplementary information and resources

Software useful for Beowulfs:

Several pieces of software: [1999-05-13]
http://www.beowulf.org/software/software.html

PVM (Parallel Virtual Machine): [1999-05-13]
http://www.epm.ornl.gov/pvm

MPI (Message Passing Interface): [1999-05-13]
    MPICH (Argonne National Laboratory's implementation of MPI):
    http://www-unix.mcs.anl.gov/mpi/mpich/index.html

    LAM/MPI (Local Area Multicomputer MPI, developed at the Ohio
    Supercomputer Center and housed at Univ. of Notre Dame):
    http://www.mpi.nd.edu/lam/

Globus (Metacomputing Environment): [1999-05-13]
http://www.globus.org/

Compilers:

	Absoft Corp. (http://www.absoft.com/) -  these guys even
	mention Extreme Linux right on their homepage! (proprietary)
	[1999-05-13]
	    FORTRAN 77 (f77) and FORTRAN 90 (f90)

	The Portland Group (http://www.pgroup.com/) (proprietary)
	[1999-05-13]
	    High Performance FORTRAN (pghpf)
	    FORTRAN 77 (pgf77)
	    C and C++ (pgcc)

	Numerical Algorithms Group (http://www.nag.com/) (proprietary)
	[1999-05-13]
	    FORTRAN 90 (f90)
	    FORTRAN 95 (f95)

	GNU CC/egcs (http://egcs.cygnus.com/) (free Fortran-77, C,
	Pascal, and C++ compilers)
	[1999-05-13]

DQS Distributed Queueing System (a free batch queueing system) [1999-05-13]
http://www.scri.fsu.edu/~pasko/dqs.html

ASCI Option Red software:  (BLAS, fast Fourier transform, hardware
performance-monitoring utilities, extended-precision math primitives
-- all available gratis under restrictive licenses) [1999-05-13]
http://www.cs.utk.edu/~ghenry/distrib/archive.htm

BVIEW: (software for monitoring your Beowulf's health) [1999-05-13]
http://w272.gsfc.nasa.gov/~udaya/Public/software/bview/bview.html

bWatch: (more software for monitoring your Beowulf's health) [1999-05-13]
http://www.sci.usq.edu.au/staff/jacek/bWatch/

BPROC: (making processes visible across nodes, allowing fork()s to
happen across nodes, allowing process migration, allowing kill()s to
work across nodes -- currently pre-alpha) [1999-05-13]
http://www.beowulf.org/software/bproc.html

cluster patches for procps: (Lets you compile /proc-based programs like
ps so they report on all processes on the cluster, not just the ones on
the machine you're logged into.) [1999-05-13]
http://www.sc.cs.tu-bs.de/pare/results/procps.html

SMILE Cluster Management System: (Run commands on all nodes, shut down
individual nodes and sets of nodes, monitor health of nodes.  Makes
clusters easier to administer.) [1999-05-13]
http://smile.cpe.ku.ac.th/software/scms/index.html

Parallel Virtual Filesystem: (LD_PRELOAD-based filesystem modification
to let you transparently stripe big files across many disks.  Allows
high-performance access to big datasets.) [1999-05-13]
http://ece.clemson.edu/parl/pvfs/

Fast math library and Free Fast Math library: (make standard
mathematical functions much faster) [1999-05-13]
http://people.frankfurt.netsurf.de/Joachim.Wesner/
http://www.lsc-group.phys.uwm.edu/~www/docs/beowulf/os_updates/fastMath.html

Scripts for configuring 'clone' worker nodes: (makes adding nodes to a
Beowulf painless) [1999-05-13]
ftp://ftp.sci.usq.edu.au/pub/jacek/beowulf-utils/disk-less/

Scripts for doing various things on a cluster -- backups, shutdowns,
reboots, running a command on every node: [1999-05-13]
ftp://ftp.sci.usq.edu.au/pub/jacek/beowulf-utils/misc_scripts/

BERT 77: ("an automatic and efficient FORTRAN parallelizer") [1999-05-13]
http://www.plogic.com/bert.html

Pentium gcc, aka PGCC, from the Pentium Compiler Group: (uses
Pentium-specific optimizations to produce 5%-30% speedups from regular
gcc) [1999-05-13]
http://goof.com/pcg/


Docs:

boilerplate software installation: [1999-05-13]
http://www.phy.duke.edu/brahma/#boilerplate

Beowulf HOWTO: [1999-05-13]
http://www.sci.usq.edu.au/staff/jacek/beowulf/BDP/HOWTO/

more boilerplate software installation: [1999-05-13]
http://www.lsc-group.phys.uwm.edu/~www/docs/beowulf/Proto-slave/autoinstall.html
http://www.lsc-group.phys.uwm.edu/~www/docs/beowulf/Slave/Slave_build.html

A little more information on "how to build a Beowulf": [1999-05-13]
http://beowulf.gsfc.nasa.gov/howto/howto.html

Yet another Beowulf Installation Howto: [1999-05-13]
http://lcdx00.wm.lc.ehu.es/~svet/beowulf/howto.html

Building a Beowulf System: [1999-05-13]
http://www.cacr.caltech.edu/research/beowulf/tutorial/beosoft/

How to Build a Beowulf-Class Cluster (slides): [1999-05-13]
http://smile.cpe.ku.ac.th/smile/beotalk/index.htm
(partly in Thai)

How to Build a Beowulf: An Electronic Book (slides): [1999-05-13]
http://smile.cpe.ku.ac.th/beowulf/index.html

Beowulf Installation and Administration HOWTO: [1999-05-13]
http://www.sci.usq.edu.au/staff/jacek/beowulf/BDP/BIAA-HOWTO/


Books:
How to build a Beowulf, from MIT Press [1999-05-13]

(END OF FAQ)	
