Problems running MPI

german kogan gkogan at students.uiuc.edu
Thu Aug 23 21:16:48 EDT 2001


Thanks for the reply. I just tried it, but I am still getting the same
error. Here is what I did: I had two slave nodes up, and I took them
down through beosetup. Then I inserted node 0 and set it to 'off'. Then
I deleted the two slave nodes in beosetup and reinstalled them from
scratch. So I had an 'off' node 0 and two slave nodes, 1 and 2. I tried
running the same simple MPI program as before and got the same error
message. Any ideas?
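For reference, here is roughly what I ran after the reconfiguration (the mpirun path is the one from the earlier message; the bpstat check is just to confirm the node states, and its exact output format may differ between Scyld releases):

```shell
# Confirm the node layout: node 0 should show as 'off',
# nodes 1 and 2 should show as up
bpstat

# Re-run the test program on two processes, same as before
/usr/mpi-beowulf/bin/mpirun -np 2 a.out
```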

Thanks



On Fri, 10 Aug 2001, Sean Dilda wrote:

> On Mon, 06 Aug 2001, german kogan wrote:
>
> >
> >
> > I installed Scyld on my master node and got one test slave node up. I was
> > trying to run a simple MPI program. When I tried to run it
> > using the command /usr/mpi-beowulf/bin/mpirun -np 2 a.out, I got the
> > following error message: "p0_2813: p4_error: net_create_slave: host not
> > bproc node: -3 p4_error: latest msg from perror: Success". However, it did
> > work when I used 1 process instead of 2. Any ideas of what the problem
> > might be?
>
> I think I finally figured out what might be causing this.  beompi (the
> MPI implementation shipped in 27bz-6 and 27bz-7) always puts the
> first slave job on node 1, then counts up from there.  You don't have a
> node 1, so it fails when it tries to send the job to node 1.  As a
> workaround, take all your nodes down, then go into beosetup, insert an
> 'off' node as node 0, make your real node node 1, and see if that
> solves the problem.
>
> We made the mpich in 27cz-8 smart enough to put a job on the
> least busy node, so this problem should be gone in that release.
>


_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf


