
I have been trying to create a star topology using MPI_Comm_split, but I seem to have an issue when I try to establish the links within all processes. The processes are all expected to link to p0 of MPI_COMM_WORLD. The problem is that I get a crash on the line

error = MPI_Intercomm_create(MPI_COMM_WORLD, 0, NEW_COMM, 0, create_tag, &INTERCOMM);

The error is: MPI_ERR_COMM: invalid communicator.

I have an idea of the cause, though I don't know how to fix it. It seems to be due to the call by process zero, which doesn't belong to the new communicator (NEW_COMM): since it passes MPI_UNDEFINED to MPI_Comm_split, it gets MPI_COMM_NULL back. I have tried putting an if statement around this line to skip it when the process rank is 0, but that fails as well, since MPI_Intercomm_create is a collective call.

Any suggestions would be appreciated.

#include <iostream>
#include "mpi.h"

using namespace std;

int main() {

    MPI_Comm NEW_COMM, INTERCOMM;
    MPI_Init(NULL, NULL);
    int world_rank, world_size, new_size, error;

    error = MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    error = MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    // process 0 keeps MPI_UNDEFINED, so MPI_Comm_split returns
    // MPI_COMM_NULL to it instead of a valid NEW_COMM
    int color = MPI_UNDEFINED;
    if (world_rank > 0)
        color = world_rank;

    error = MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &NEW_COMM);

    int new_rank;
    if (world_rank > 0) {
        error = MPI_Comm_rank(NEW_COMM, &new_rank);
        error = MPI_Comm_size(NEW_COMM, &new_size);
    }
    int create_tag = 99;

    // crashes here: on process 0, NEW_COMM is MPI_COMM_NULL
    error = MPI_Intercomm_create(MPI_COMM_WORLD, 0, NEW_COMM, 0, create_tag, &INTERCOMM);

    if (world_rank > 0)
        cout << " My Rank in WORLD = " << world_rank << " New rank = " << new_rank
             << " size of NEWCOMM = " << new_size << endl;
    else
        cout << " Am centre " << endl;

    MPI_Finalize();

    return 0;
}
I could be wrong, but I believe that MPI_Intercomm_create takes two disjoint intracommunicators as input. As written, you're giving it MPI_COMM_WORLD and NEW_COMM, and one communicator is a subset of the other. I believe you should create a second communicator containing just the world_rank == 0 process: ROOT_COMM. Then calling MPI_Intercomm_create(ROOT_COMM, 0, NEW_COMM, 0, create_tag, &INTERCOMM); might work as expected. — NoseKnowsAll
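
Following up on that suggestion, here is a minimal sketch of the two-intracommunicator approach (the names local_comm and remote_leader, and the choice of a single group for all spokes, are mine, not from the question). Rank 0 is split into its own group, all other ranks into a second group, and MPI_COMM_WORLD acts as the peer communicator through which the two group leaders talk. Every rank then passes a valid local communicator, so the collective call never sees MPI_COMM_NULL:

#include <iostream>
#include "mpi.h"

using namespace std;

int main() {

    MPI_Init(NULL, NULL);
    int world_rank, world_size;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size); // needs at least 2 processes

    // two disjoint groups: colour 0 = centre (rank 0 alone), colour 1 = all spokes
    int color = (world_rank == 0) ? 0 : 1;
    MPI_Comm local_comm;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &local_comm);

    // the remote leader is named by its rank in the peer communicator
    // (MPI_COMM_WORLD): world rank 1 leads the spokes, world rank 0 the centre
    int remote_leader = (world_rank == 0) ? 1 : 0;
    MPI_Comm intercomm;
    MPI_Intercomm_create(local_comm, 0, MPI_COMM_WORLD, remote_leader, 99, &intercomm);

    int remote_size;
    MPI_Comm_remote_size(intercomm, &remote_size);
    cout << "World rank " << world_rank << " sees " << remote_size
         << " process(es) in the remote group" << endl;

    MPI_Comm_free(&intercomm);
    MPI_Comm_free(&local_comm);
    MPI_Finalize();
    return 0;
}

Note that this links the centre to all spokes through a single intercommunicator; if you really want one separate link per spoke, as the original per-rank colouring suggests, you would need one MPI_Intercomm_create call per spoke, each with its own tag.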

1 Answer


What about using an MPI topology instead? Something like this:

#include <mpi.h>
#include <iostream>

int main( int argc, char *argv[] ) {
    MPI_Init( &argc, &argv );
    int rank, size;

    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &size );

    int indegree, outdegree, *sources, *sourceweights, *destinations, *destweights;

    if ( rank == 0 ) { //centre of the star
        indegree = outdegree = size - 1;
        sources = new int[size - 1];
        sourceweights = new int[size - 1];
        destinations = new int[size - 1];
        destweights = new int[size - 1];
        for ( int i = 0; i < size - 1; i++ ) {
            sources[i] = destinations[i] = i + 1;
            sourceweights[i] = destweights[i] = 1;
        }
    }
    else { // tips of the star
        indegree = outdegree =  1;
        sources = new int[1];
        sourceweights = new int[1];
        destinations = new int[1];
        destweights = new int[1];
        sources[0] = destinations[0] = 0;
        sourceweights[0] = destweights[0] = 1;
    }

    MPI_Comm star;
    MPI_Dist_graph_create_adjacent( MPI_COMM_WORLD, indegree, sources, sourceweights,
                                    outdegree, destinations, destweights, MPI_INFO_NULL,
                                    true, &star );
    delete[] sources;
    delete[] sourceweights;
    delete[] destinations;
    delete[] destweights;

    int starrank;

    MPI_Comm_rank( star, &starrank );

    std::cout << "Process #" << rank << " of MPI_COMM_WORLD is process #" << starrank << " of the star\n";

    MPI_Comm_free( &star );

    MPI_Finalize();

    return 0;
}

Is that the sort of thing you were after? If not, what is your communicator for?


EDIT: Explanation about MPI topologies

I wanted to clarify that, even if this graph communicator is presented as a star, it is no different from MPI_COMM_WORLD in most respects. Notably, it comprises the whole set of MPI processes initially present in MPI_COMM_WORLD. Indeed, although its star shape has been defined and we didn't declare any link between, for example, process #1 and process #2, nothing prevents you from doing point-to-point communication between those two processes. By defining this graph topology, you simply give an indication of the sort of communication pattern your code will expose.

You then ask the library to try to reorder the ranks on the physical nodes, to come up with a possibly better match between the physical layout of your machine / network and the needs you expressed. This can be done internally by an algorithm minimising a cost function, for example with a simulated annealing method, but it is costly. Moreover, it supposes that the actual layout of the network is available somewhere for the library to use (which isn't the case most of the time). So at the end of the day, this placement optimisation phase is usually just ignored, and you end up with the same ranks as the ones you entered. I only know of some meshed / torus-shaped network-based machines that actually perform the placement phase for MPI_Cart_create(), but maybe I'm outdated on that.
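
To make that "indication of the communication pattern" concrete, here is a minimal sketch (mine, not part of the original answer) of one place where the library does consume the topology: MPI-3 neighbourhood collectives. It rebuilds the same star compactly and then calls MPI_Neighbor_allgather, so each process exchanges data with its declared neighbours only:

#include <mpi.h>
#include <iostream>
#include <vector>

int main( int argc, char *argv[] ) {
    MPI_Init( &argc, &argv );
    int rank, size;
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &size );

    // same star as above, built compactly: the centre lists every spoke,
    // each spoke lists only the centre
    std::vector<int> nbrs;
    if ( rank == 0 )
        for ( int i = 1; i < size; i++ )
            nbrs.push_back( i );
    else
        nbrs.push_back( 0 );
    std::vector<int> weights( nbrs.size(), 1 );

    MPI_Comm star;
    MPI_Dist_graph_create_adjacent( MPI_COMM_WORLD,
                                    (int) nbrs.size(), &nbrs[0], &weights[0],
                                    (int) nbrs.size(), &nbrs[0], &weights[0],
                                    MPI_INFO_NULL, true, &star );

    // each process sends its world rank to all of its neighbours:
    // the centre receives size-1 values, each spoke receives exactly one
    std::vector<int> recvbuf( nbrs.size() );
    MPI_Neighbor_allgather( &rank, 1, MPI_INT,
                            &recvbuf[0], 1, MPI_INT, star );

    std::cout << "Process #" << rank << " heard from:";
    for ( size_t i = 0; i < recvbuf.size(); i++ )
        std::cout << " " << recvbuf[i];
    std::cout << "\n";

    MPI_Comm_free( &star );
    MPI_Finalize();
    return 0;
}

With 4 processes, process #0 hears from 1, 2 and 3, while each spoke hears only from 0, which is exactly the star pattern declared in the graph.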

Anyway, the bottom line is that I understand you want to play with communicators for learning purposes, but don't expect too much of them. The best thing to learn here is how to get the ones you want with the fewest and simplest possible calls, which is, I hope, what I proposed.