1 vote

I am new to MPI, and my question is how the root (for example rank 0) initializes all its values in the array before other processes receive their i-th value from the root. For example, in the root I initialize: arr[0]=20, arr[1]=90, arr[2]=80.

My question is, if I have for example a process (rank 2) that starts a little bit before the root process, can MPI_Scatter send an incorrect value instead of 80?

How can I ensure the root initializes all its memory before the others use Scatter?

Thank you !


2 Answers

1 vote

The MPI standard specifies that

If comm is an intracommunicator, the outcome is as if the root executed n send operations, MPI_Send(sendbuf + i*sendcount*extent(sendtype), sendcount, sendtype, i, ...), and each process executed a receive, MPI_Recv(recvbuf, recvcount, recvtype, i, ...).

This means that each non-root process will wait until its recvcount elements have been transmitted. In other words, MPI_Scatter is a blocking routine: the call does not return until the communication it needs has completed.

You, as the programmer, are responsible for ensuring that the data being sent is correct by the time you call any communication routine, and for leaving the send buffer untouched until it becomes available again (in this case, until MPI_Scatter returns). In an MPI-only program this is as simple as placing the initialization code before the call to MPI_Scatter, since each process executes the program sequentially.

The following is an example based on the document's Example 5.11:

#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Comm comm = MPI_COMM_WORLD;
    int grank, gsize;
    int root = 0;
    int *sendbuf = NULL;  // ignored on non-root ranks, but keep it initialized
    int *rbuf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(comm, &grank);
    MPI_Comm_size(comm, &gsize);

    if (grank == root) {
        // Only the root allocates and fills the full send buffer.
        sendbuf = (int *)malloc(gsize * 100 * sizeof(int));
        for (int i = 0; i < gsize * 100; i++)
            sendbuf[i] = i;
    }
    rbuf = (int *)malloc(100 * sizeof(int));
    // Distribute sendbuf: rank i receives elements [100*i, 100*i + 99].
    // At the root, all sendbuf values are valid before this call;
    // on non-root processes, the sendbuf argument is ignored.
    MPI_Scatter(sendbuf, 100, MPI_INT, rbuf, 100, MPI_INT, root, comm);

    free(rbuf);
    if (grank == root)
        free(sendbuf);
    MPI_Finalize();
    return 0;
}
0 votes

MPI_Scatter() is a collective operation, so the MPI library takes care of everything, and the outcome of a collective operation does not depend on which rank called it earlier than another.

In this specific case, a non-root rank will block (at least) until the root rank calls MPI_Scatter().

This is no different from MPI_Send() / MPI_Recv(): MPI_Recv() blocks if it is called before the remote peer has sent a matching message with MPI_Send().