
I saw this on the internet:

MPI_Scatter takes an array of elements and distributes the elements in the order of process rank.

But I could not find it in the documentation.

I have an array and there are 4 processes. One process is the root and will scatter the data among the other 3 processes. The ranks are 0, 1, 2, 3.

Question: Is MPI_Scatter() or MPI_Scatterv() guaranteed to send the data in rank order?

Example 1:

0: [a, b, c, d, e]

// after scatter

1: [a, b]
2: [c, d]
3: [e]

Example 2:

0: [a, b]

// after scatter

1: [a]
2: [b]
3: [ ]

Also, does MPI_Gather() do the same thing (preserve the order)?
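
For concreteness, here is a minimal sketch of how I would call MPI_Scatterv() for Example 1 (the counts, displacements, and buffer contents are my own guesses at what produces that layout; the root keeps nothing for itself):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);      /* run with 4 processes */

    char sendbuf[5] = {'a', 'b', 'c', 'd', 'e'};   /* only significant on the root */
    int sendcounts[4] = {0, 2, 2, 1};              /* rank 0 (root) receives nothing */
    int displs[4]     = {0, 0, 2, 4};              /* offset into sendbuf for each rank */

    char recvbuf[2];
    MPI_Scatterv(sendbuf, sendcounts, displs, MPI_CHAR,
                 recvbuf, sendcounts[rank], MPI_CHAR,
                 0, MPI_COMM_WORLD);

    for (int i = 0; i < sendcounts[rank]; i++)
        printf("rank %d got %c\n", rank, recvbuf[i]);

    MPI_Finalize();
    return 0;
}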

MPI_Scatter() scatters the data to all tasks, including the root itself. In your case, you need the MPI_Scatterv() variant, since the send buffer size is not a multiple of the communicator size. - Gilles Gouaillardet
@GillesGouaillardet Thanks for the info. But still, will MPI_Scatterv() send data in order of proc rank? - DonJoe
@donjoe - with scatterv, the order is under your control. - Jonathan Dursi
@JonathanDursi And, since I have to control the order, do you think it is more efficient to use scatterv or manual send/recv? (for cases similar to my examples) - DonJoe
Scatterv will normally be more efficient since the implementation can use more efficient algorithms than a linear loop of sends. - Jonathan Dursi

1 Answer


Yes, the order is guaranteed: the data is distributed according to rank in the communicator (MPI_Comm).

The statement below is copied from the Open MPI v2.1 documentation:

An alternative description is that the root sends a message with MPI_Send(sendbuf, sendcount * n, sendtype, ...). This message is split into n equal segments, the ith segment is sent to the ith process in the group, and each process receives this message as above. The send buffer is ignored for all nonroot processes.
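
As a minimal illustration of that statement (my own sketch, not from your question: 4 processes, 8 ints, invented values), MPI_Scatter() hands segment i to rank i:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int sendbuf[8] = {10, 11, 20, 21, 30, 31, 40, 41};   /* significant on the root only */
    int recvbuf[2];

    /* The send buffer is split into 4 equal segments of 2 ints;
       segment i goes to rank i, i.e. the data is distributed in rank order. */
    MPI_Scatter(sendbuf, 2, MPI_INT,
                recvbuf, 2, MPI_INT,
                0, MPI_COMM_WORLD);

    printf("rank %d got %d %d\n", rank, recvbuf[0], recvbuf[1]);

    MPI_Finalize();
    return 0;
}

MPI_Gather() uses the same rank ordering in reverse: the segment received from rank i is placed at offset i * recvcount in the root's receive buffer.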