2
votes

I am using a Cartesian topology in an MPI program. Now I want to collect information about a variable (let's call it 'state') from each process at the end of the program and print the results on the screen (from the root process). Normally I would use MPI_Gather, but how can I do it in "Cartesian style"?

2
You use MPI_Gather. There is no such thing as "Cartesian style" in MPI, unless you mean the neighbourhood gather operation introduced in MPI-3.0, which allows each process to gather only from its neighbours in the Cartesian topology. – Hristo Iliev
Ok, and is there any non-blocking MPI_Gather in MPI? I read about MPI_Igather but I guess it was introduced in the next versions... – tomomomo
What do you want to do? Do you want to print the information in a 2D matrix on the screen, where each value corresponds to its correct position in the grid? – Chiel
Yes, there is MPI_Igather in MPI-3.0. – Hristo Iliev

2 Answers

1
votes

The original paper that proposed neighborhood collectives provides simple example implementations of both blocking and non-blocking neighborhood alltoall operations. You can use those as a starting point to roll your own neighborhood gather, even if you're running on an MPI implementation that doesn't yet support all of the MPI-3.0 features:

Hoefler and Träff: "Sparse Collective Operations for MPI"

MPICH recently added support as well. You can take a look at their implementation, although there is a lot of extraneous code in it for error handling, thread safety, and the like.

1
votes

If anyone is still interested, I decided to go with non-blocking receives (MPI_Irecv) on the root, followed by MPI_Waitall:

if (rank == 0) {
    // Post non-blocking receives for the final state of every process
    for (i = 0; i < m*n; i++) {
        MPI_Irecv(&states[i], 10, MPI_CHAR, MPI_ANY_SOURCE, final_tag,
                  MPI_COMM_WORLD, &report_requests[i]);
    }
}

// Further computation

// Every process (rank 0 included) reports its final state to the root
MPI_Send(state, 10, MPI_CHAR, 0, final_tag, MPI_COMM_WORLD);

if (rank == 0) {
    // Wait until all m*n state reports have arrived
    MPI_Waitall(m*n, report_requests, MPI_STATUSES_IGNORE);

    // Process the collected states
}

That did the trick for me :)