mpirun + Scyld MPI

Donald Becker becker at
Fri Nov 14 01:11:20 EST 2003

On Wed, 12 Nov 2003, Zukaitis, Anthony wrote:

> I am currently using MPI distributed with scyld which I believe is

Which version of Scyld?

> I have 6 dual-CPU nodes for a total of 12 CPUs.  Whenever I try to use 12
> processors, it puts 3 processes on one of the nodes and only one process on
> the master node.

That's the preferred behavior, and thus the default.  The initial
single process, which will become MPI Rank 0, runs on the master.
Initialization and scheduling are done single-threaded, and the
additional processes are created when MPI_Init() is called.

An alternate behavior is putting all processes on compute nodes.  This
leaves the master free to manage the jobs.  Rank 0 will be on a
compute node and thus may not have access to the full set of file
systems and scheduling information.
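To make the two placements concrete for this 6-node, dual-CPU cluster, here is a small sketch expressed as colon-delimited job maps (one node number per rank).  It assumes the common Scyld convention that the master is node -1 and the compute nodes are numbered 0 through 5; treat the exact numbering as an assumption, not gospel.

```python
# Sketch of the two placements for 6 dual-CPU nodes (12 ranks).
# Assumption: Scyld numbers the master node -1 and compute nodes 0..5.

compute = [n for n in range(6) for _ in range(2)]   # two rank slots per node

# Default-style placement: rank 0 starts on the master, the remaining
# ranks fill the compute-node slots.
default_map = ":".join(str(n) for n in [-1] + compute[:-1])

# Alternate placement: every rank, including rank 0, on a compute node.
compute_only_map = ":".join(str(n) for n in compute)

print(default_map)       # -1:0:0:1:1:2:2:3:3:4:4:5
print(compute_only_map)  # 0:0:1:1:2:2:3:3:4:4:5:5
```

The only difference between the two strings is where rank 0 lands, which is exactly the trade-off described above: master-side access to file systems and scheduling information versus keeping the master free.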

> I have tried using a machinefile like
> master:2
> .0:2
> .1:2
> .2:2
> .3:2
> .4:2

Using a 'machinefile' is old-fashioned and inflexible.
Read the 'beomap' section in the manual for details on the many
scheduling options available with Scyld.  I'm guessing that you want the
control of specifying an explicit job map with the environment variable
or command-line option:

  --map <list>			BEOWULF_JOB_MAP
     Use the colon-delimited list to specify which nodes to run on.
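For a 12-rank job like the one above, the colon-delimited list can be generated rather than typed by hand.  A minimal sketch, assuming compute nodes numbered 0-5; `job_map` is a hypothetical helper, not part of Scyld:

```python
# Hypothetical helper: build a colon-delimited job map placing
# `cpus` ranks on each listed node number, in order.
def job_map(nodes, cpus):
    return ":".join(str(n) for n in nodes for _ in range(cpus))

# Two ranks on each of compute nodes 0-5:
print(job_map(range(6), 2))   # 0:0:1:1:2:2:3:3:4:4:5:5
```

The resulting string would then be supplied through the environment variable, along the lines of `BEOWULF_JOB_MAP=0:0:1:1:2:2:3:3:4:4:5:5 mpirun -np 12 ./myapp` (check the Scyld mpirun documentation for the exact invocation).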

It's also possible for the application to influence or specify a process
mapping, or for the administrator to install an alternate scheduler as a
dynamic library.

Donald Becker				becker at
Scyld Computing Corporation
914 Bay Ridge Road, Suite 220		Scyld Beowulf cluster system
Annapolis MD 21403			410-990-9993

Beowulf mailing list, Beowulf at