[Beowulf] MPI synchronization problem

Geoff Jacobs gdjacobs at gmail.com
Sun Nov 12 21:59:39 EST 2006

Michel Dierks wrote:
> Geoff Jacobs wrote:
>> Michel Dierks wrote:
>>> Hello,
>>> I'm developing an application to calculate the Mandelbrot fractal.
>> Hmm... I never did this when I was learning the ropes. There is a pretty
>> example included with the mpich source code to do Mandelbrot fractals if
>> you need something to crib from.
>>> My problem is:
>>> from the master I will send to each free node a message containing
>>> the values of one screen line to compute.
>> I would think you can get away with transmitting just the boundaries of
>> the section of complex plane which you will be plotting, as well as the
>> iteration limit, color interpretation, etc. Slaves can determine what
>> portion they must calculate from their ranks.
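To expand on that: each slave can derive its rows from its rank alone.
An untested sketch (WIDTH, HEIGHT, the plane bounds, and max_iter are
placeholder names):

    /* Untested sketch, run on a slave (rank >= 1; rank 0 is the
       master): this slave computes rows rank-1, rank-1+(nprocs-1), ...
       of the image. */
    int rank, nprocs, row, col;
    double cr, ci;

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    for (row = rank - 1; row < HEIGHT; row += nprocs - 1) {
        ci = y_min + row * (y_max - y_min) / (HEIGHT - 1);
        for (col = 0; col < WIDTH; col++) {
            cr = x_min + col * (x_max - x_min) / (WIDTH - 1);
            /* iterate z = z*z + c up to max_iter, record the count */
        }
    }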
>>> After a time the nodes will send back the results.
>>> I have seen that I can use MPI_Isend and MPI_Irecv for non-blocking
>>> communication. This gives me no problem if we are talking about one
>>> send and one receive. But how can I handle the sending from each node
>>> to the master without data corruption?
>>> Must I implement a send and a receive buffer for each node (16 nodes
>>> means 16 buffers in and 16 out)?
>> The software I've implemented has tended to be rather dynamic, so it
>> seemed easier to use discrete sends and receives. In your case, you will
>> be calculating a vector of fixed size on each slave (one row of your
>> resultant image per slave). It would be logical to use a collective
>> operation like MPI_Gather to automatically populate the array
>> representing your raster image.
>> http://csit1cwe.fsu.edu/extra_link/pe/d3a64mst07.html#HDRIGATH
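For illustration, an untested sketch of that (row_buf and image are
placeholder names):

    /* Untested sketch: each rank contributes one row of WIDTH ints,
       and MPI_Gather assembles them in rank order on the root. */
    int row_buf[WIDTH];
    int *image = NULL;

    if (rank == 0)
        image = malloc(nprocs * WIDTH * sizeof(int));

    /* ... each rank fills row_buf ... */

    MPI_Gather(row_buf, WIDTH, MPI_INT,
               image, WIDTH, MPI_INT,
               0, MPI_COMM_WORLD);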
>>> Can someone help me? Please.
> Hello,
> it is now two days that I have been facing the following problem.
> The master sends messages to the nodes (about 200 separate messages
> for each node). I cannot group them; I must send message after message
> to each node. And to answer Geoff Jacobs: I cannot use MPI_Gather,
> because I must determine which node gets the next message (based on a
> list of free nodes).
> My send runs correctly, and so does my receive for the first message,
> but not for the second. I can see that the master sends the second
> message and that MPI_Iprobe on the node sees that a message has
> arrived. But after this MPI_Iprobe, MPI_IRecv does not work this time.
> Why? I have done some research on the MPI forum and in some other
> places, but I cannot find a correct explanation. All the examples
> given talk about one send and one receive, not about multiple sends
> and receives across more than one node. I found the routine
> MPI_Request_free and tested it, but it gives an error telling me that
> I have an invalid handle. After some more research, I found this at
> http://www.pdc.kth.se/training/Talks/MPI/Persistent/more.html, point 2
> on the page:
>> "When a program calls a non-blocking message-passing routine such as
>> |MPI_Isend|, a request object is created, and then the
>> communication is started.  These steps are equivalent to two other MPI
>> calls, |MPI_Send_init| and |MPI_Start|.  When
>> the program calls |MPI_Wait|, it waits until all necessary
>> local operations have completed, and then frees the memory used to
>> store the request object.  This second step equals a call to
>> |MPI_Request_free|."
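That passage also suggests why MPI_Request_free gave you an invalid
handle: for a plain MPI_Isend, the MPI_Wait has already freed the
request, so there is nothing left to free afterwards. MPI_Request_free
pairs with the persistent form, which looks roughly like this untested
sketch (n_messages is a placeholder name):

    /* Untested sketch of the persistent-request pattern described
       above: create the request once, then restart it per message. */
    MPI_Request req;
    MPI_Status  status;
    int i;

    MPI_Send_init(buffer_sendi, MAX_INT, MPI_INT, 0, tag,
                  MPI_COMM_WORLD, &req);
    for (i = 0; i < n_messages; i++) {
        /* ... fill buffer_sendi ... */
        MPI_Start(&req);         /* like starting an MPI_Isend       */
        MPI_Wait(&req, &status); /* completes; req remains usable    */
    }
    MPI_Request_free(&req);      /* the explicit free is needed here */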
> So I don't understand anymore what to do.
> Can someone of you tell me clearly what I'm doing wrong? Below is the
> part of my code that runs on the nodes:
>     /*****************************
>     *  Parts running on node    *
>     *****************************/
>
>     /* Memory allocation for buffer_sendi and buffer_recvd */
>     if ((buffer_sendi = (int *)malloc(MAX_INT * sizeof(int))) == NULL)
>     {
>         exit(1);
>     }
>     if ((buffer_recvd = (double *)malloc(MAX_DOUBLE * sizeof(double))) == NULL)
>     {
>         exit(1);
>     }
>
>     /* Loop until tag with value -99 is received */
>     do
>     {
>         MPI_Iprobe(0, MPI_ANY_TAG, MPI_COMM_WORLD, &flag, &status_recvd);
>         if (flag == 1)
>         {
>             /* Reception from master */
>             MPI_IRecv(buffer_recvd, MAX_DOUBLE, MPI_DOUBLE, 0, tag,
>                       MPI_COMM_WORLD, &status_recvd);
>             MPI_Wait(&request_recvd, &status);
MPI_ISend and MPI_IRecv both take a request argument, and neither takes
a status argument. The argument lists for asynchronous communications
are different from those for synchronous ones.

The tag, in the case of the MPI_IRecv, should be status_recvd.MPI_TAG,
not tag.
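In other words, something along these lines (an untested sketch,
assuming request_recvd is an MPI_Request and status an MPI_Status):

    /* Untested sketch: MPI_Irecv fills in a request, not a status;
       MPI_Wait then completes the receive and fills in the status. */
    MPI_Irecv(buffer_recvd, MAX_DOUBLE, MPI_DOUBLE, 0,
              status_recvd.MPI_TAG,   /* tag reported by MPI_Iprobe */
              MPI_COMM_WORLD, &request_recvd);
    MPI_Wait(&request_recvd, &status);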

>             /*
>                 some calculation
>             */
>
>             /* Send buffer_sendi to master */
>             MPI_ISend(buffer_sendi, MAX_INT, MPI_INT, 0, tag, MPI_COMM_WORLD);
Here, tag can be anything. You do need a request variable, as noted
above; see the corrected sketch after the quoted loop below.

>             MPI_Wait(&request_recvd, &status);
>         }
>     } while (buffer_recvd[0] != -99.00);
> Thanks for your help.
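Putting the fixes together, the slave loop might look like this
untested sketch (keeping your buffer names; the result tag of 0 is
arbitrary):

    /* Untested sketch of the corrected slave loop. */
    MPI_Request req;
    MPI_Status  status, probe_status;
    int flag;

    do {
        MPI_Iprobe(0, MPI_ANY_TAG, MPI_COMM_WORLD, &flag, &probe_status);
        if (flag) {
            /* receive one work unit from the master */
            MPI_Irecv(buffer_recvd, MAX_DOUBLE, MPI_DOUBLE, 0,
                      probe_status.MPI_TAG, MPI_COMM_WORLD, &req);
            MPI_Wait(&req, &status);

            /* ... some calculation ... */

            /* send the results back to the master */
            MPI_Isend(buffer_sendi, MAX_INT, MPI_INT, 0, 0,
                      MPI_COMM_WORLD, &req);
            MPI_Wait(&req, &status);
        }
    } while (buffer_recvd[0] != -99.00);

Since you call MPI_Wait immediately after each nonblocking call, plain
MPI_Recv and MPI_Send would behave identically here and are simpler.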

Geoffrey D. Jacobs

Go to the Chinese Restaurant,
Order the Special
