[BProc] Master node in Clustermatic (BProc)

Yung-Sheng Tang jeff at hoolan.org
Thu Nov 28 05:46:00 EST 2002

On 28 Nov 2002, Ana Bosque wrote:

> On Thu, 2002-11-28 at 02:30, Yung-Sheng Tang wrote:
> > Does anybody have experience with Clustermatic or another BProc-based
> > single-system-image distribution? I can't dispatch MPI jobs to my
> > master (head) node because it doesn't appear in bproc_nodelist().
> Try with -1. The number -1 is the master node. I tried it and it worked
> fine.

I can use "env NODES=-1,0 mpirun --p4 -np 2 ./hello++" to tell mpirun
directly to dispatch hello++ to the master node and node 0, but "env
NODES=-1,0 mpirun --gm -np 2 ./hello++" fails: since the master node
doesn't appear in the node list, its GM ID can't be found, and the
mpich-gm that Clustermatic supplies needs GM IDs, not IP addresses, to
establish communication. My cluster nodes, master and slaves alike, each
have a Fast Ethernet and a Myrinet interface. Anyway, thank you for your
reply.
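
For reference, a minimal sketch of why the GM ID lookup has nothing to
find for the master (assuming libbproc's bproc_currnode() and
bproc_numnodes() calls and the -lbproc link flag; the exact API may
differ between BProc releases):

  /* Hypothetical illustration: on a BProc system the slave nodes are
   * numbered 0..bproc_numnodes()-1, while the master is node -1, so a
   * table keyed on the slave node list (such as a GM ID table) has no
   * entry for the master. */
  #include <stdio.h>
  #include <sys/bproc.h>

  int main(void)
  {
      int here  = bproc_currnode();  /* assumed to return -1 on the master */
      int count = bproc_numnodes();  /* number of slave nodes              */
      int n;

      printf("running on node %d, %d slave node(s)\n", here, count);

      /* Only 0..count-1 show up here; -1 never does. */
      for (n = 0; n < count; n++)
          printf("node %d is in the node list\n", n);

      return 0;
  }

If that's right, running it on the master should report node -1 for
itself while the loop only ever covers nodes 0 through count-1, which is
exactly the gap that mpirun --gm trips over.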

