0
votes

To generalize my question, let's say I have a small cluster consisting of 9 nodes arranged in a 3×3 matrix:

6 7 8
3 4 5
0 1 2

I was trying to create several "local" communicators (MPI_Comm), each of which includes:

  1. the rank of the current node and
  2. the ranks of the adjacent nodes.

I tried to split MPI_COMM_WORLD and create a new communicator for each node, but I failed to use the new communicator since it seems to include ranks that don't belong to the current node.

So here is my question: is it possible to use only one variable for all nodes, say local_comm, that contains a different set of ranks on each node? Or do I have to use different variables like below?

MPI_Comm local_comm_0;   /* should contain ranks {0, 1, 3} */
MPI_Comm local_comm_4;   /* should contain ranks {1, 3, 4, 5, 7} */
MPI_Comm local_comm_7;   /* should contain ranks {4, 6, 7, 8} */

etc...

Thanks in advance.

1
This looks suspiciously like an attempt to write code to do what cartesian communicators already provide. If MPI_CART_CREATE isn't what you want, explain further. – High Performance Mark

MPI_CART_CREATE is what I need, thanks for the hint. – Gnavvy

1 Answer

2
votes

You can have one variable with the same name on all nodes, but you probably don't want to. Each node's communicator would contain a different set of nodes from those of its neighbours. In your example, nodes 4 and 7 are neighbours but have different sets of nodes in their communicators. This is going to cause headaches.

A better idea (though it depends on exactly what you're doing) would be to use MPI_Cart_create to define the processor grid, then use normal Sends and Recvs (or Isends and Irecvs) to do the communication. There's an MPI_Cart_create example here: http://mpi.deino.net/mpi_functions/MPI_Cart_create.html
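A minimal sketch of that approach, assuming a 3×3 non-periodic grid run with exactly 9 processes; MPI_Cart_shift returns MPI_PROC_NULL at the edges, which MPI_Sendrecv accepts, so no special-casing is needed:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* 3x3 non-periodic grid; allow MPI to reorder ranks if it wants to. */
    int dims[2]    = {3, 3};
    int periods[2] = {0, 0};
    MPI_Comm cart_comm;
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &cart_comm);

    int cart_rank, coords[2];
    MPI_Comm_rank(cart_comm, &cart_rank);
    MPI_Cart_coords(cart_comm, cart_rank, 2, coords);

    /* Neighbours along each dimension; MPI_PROC_NULL beyond the edges. */
    int up, down, left, right;
    MPI_Cart_shift(cart_comm, 0, 1, &up, &down);
    MPI_Cart_shift(cart_comm, 1, 1, &left, &right);

    /* Example exchange: send my rank to the right, receive from the left. */
    int send_val = cart_rank, recv_val = -1;
    MPI_Sendrecv(&send_val, 1, MPI_INT, right, 0,
                 &recv_val, 1, MPI_INT, left, 0,
                 cart_comm, MPI_STATUS_IGNORE);

    printf("rank %d at (%d,%d): up=%d down=%d left=%d right=%d, got %d from left\n",
           cart_rank, coords[0], coords[1], up, down, left, right, recv_val);

    MPI_Comm_free(&cart_comm);
    MPI_Finalize();
    return 0;
}
```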

If the sets of nodes were totally disjoint (like {0,3,6}, {1,4,7}, {2,5,8}) then I'd suggest creating different communicators and giving them the same variable name. But I don't think that's going to be the right solution for your problem.
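For that disjoint case, a sketch using MPI_Comm_split, assuming the row-major numbering from the question so that every rank in a column shares the same value of rank % 3:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Columns are {0,3,6}, {1,4,7}, {2,5,8}: use rank % 3 as the colour. */
    int color = world_rank % 3;
    MPI_Comm col_comm;   /* same variable name on every rank, disjoint groups */
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &col_comm);

    int col_rank, col_size;
    MPI_Comm_rank(col_comm, &col_rank);
    MPI_Comm_size(col_comm, &col_size);
    printf("world rank %d is rank %d of %d in column %d\n",
           world_rank, col_rank, col_size, color);

    MPI_Comm_free(&col_comm);
    MPI_Finalize();
    return 0;
}
```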