[Beowulf] problem with execution of cpi in two node cluster

Vinodh gvinodh1980 at yahoo.co.in
Wed Jan 12 02:27:08 EST 2005


Hi,
     I set up a two-node cluster with mpich2-1.0.

The name of the master node is aarya.
The name of the slave node is desktop2.

I enabled passwordless ssh sessions between the nodes.

In mpd.hosts, I included the names of both nodes.

The command mpdboot -n 2 works fine.

The command mpdtrace lists both machines.

I copied the example program cpi to /home/vinodh/ on
both nodes.

mpiexec -n 2 cpi gives the output:

Process 0 of 2 is on aarya
Process 1 of 2 is on desktop2
aborting job:
Fatal error in MPI_Bcast: Other MPI error, error
stack:
MPI_Bcast(821): MPI_Bcast(buf=0xbfffbf28, count=1,
MPI_INT, root=0, MPI_COMM_WORLD) failed
MPIR_Bcast(229):
MPIC_Send(48):
MPIC_Wait(308):
MPIDI_CH3_Progress_wait(207): an error occurred while
handling an event returned by MPIDU_Sock_Wait()
MPIDI_CH3I_Progress_handle_sock_event(1053):
[ch3:sock] failed to connnect to remote process
kvs_aarya_40892_0:1
MPIDU_Socki_handle_connect(767): connection failure
(set=0,sock=1,errno=113:No route to host)
rank 0 in job 1  aarya_40878   caused collective abort
of all ranks
  exit status of rank 0: return code 13
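
The error stack above fails inside MPI_Bcast, which, unlike a plain printf, forces a TCP connection between the two nodes. A minimal sketch that exercises the same collective (a hypothetical test program, not the actual cpi source, which also does an MPI_Reduce) would be:

```c
/* Minimal sketch of the collective that fails in cpi: one
 * MPI_Bcast across all ranks.  This is an assumption about the
 * failure mode, not the full cpi source. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, n = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        n = 100;  /* root supplies the value to broadcast */

    /* This call opens a socket between the nodes; a
     * "No route to host" (errno 113) here usually means a
     * firewall or a bad /etc/hosts entry, not an MPI bug. */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("rank %d got n = %d\n", rank, n);

    MPI_Finalize();
    return 0;
}
```

If this sketch fails with the same stack when run with mpiexec -n 2 across both nodes, the problem is node-to-node connectivity rather than anything in cpi itself.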


But the other example, hellow, works fine.

Let me know why there's an error for the program cpi.
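
(For reference: errno 113, "No route to host", usually points at a firewall or a hosts-file problem between the nodes rather than an MPI bug. Some diagnostics one might run on each node, assuming a Linux system with iptables; command paths may differ:)

```shell
# On aarya, check that desktop2 resolves and is reachable:
ping -c 3 desktop2

# List the firewall rules; MPICH2 uses dynamic TCP ports, so a
# default-DROP policy between the nodes will break MPI_Bcast:
/sbin/iptables -L -n

# As a temporary test only (this removes all filtering),
# flush the rules on both nodes and rerun cpi:
/sbin/iptables -F
```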

Regards,
G. Vinodh Kumar


_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf