
I am trying to use MPI_Gather to recover data from the slaves. Basically, a simulation is running on each slave (and it is not the same on each one), and I want to recover one integer on the master (the result of the simulation). From each integer, I calculate a new value 'a' on the master that I send back to the slave, which redoes the simulation with this better parameter. I hope this is clear; I am pretty new to MPI.

Note: the simulations will not all finish at the same time.

int main
    while(true){
        if (rank==0) runMaster();
        else runSlave();
    }

runMaster()
    receive data b from all slaves (with MPI_Gather)
    calculate parameter a for each slave; aTotal=[a_1,...,a_n]
    MPI_Scatter(aTotal, to slaves)

runSlave()
    a = aTotal[rank]
    b = simulationRun(a)
    MPI_Gather(&b, to master)

To avoid a deadlock, each slave is initialized with a random a.
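
Something like the following minimal sketch is what I have in mind (simulationRun and computeNextParam are placeholders for my real routines). Note that the root has to take part in the collectives too: with MPI_Gather/MPI_Scatter on MPI_COMM_WORLD, rank 0 contributes and receives its own element, so in this sketch rank 0 also runs a simulation:

#include <mpi.h>

int simulationRun(int a)    { return a + 1; }  // placeholder
int computeNextParam(int b) { return 2 * b; }  // placeholder

int main (int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int a = rank;                 // each rank starts with its own (random) a
    int* bTotal = new int[size];  // only read on the root
    int* aTotal = new int[size];  // only filled on the root

    for (int iter = 0; iter < 10; iter++) {
        int b = simulationRun(a);
        // every rank, including the root, must call the collective
        MPI_Gather(&b, 1, MPI_INT, bTotal, 1, MPI_INT, 0, MPI_COMM_WORLD);
        if (rank == 0)
            for (int i = 0; i < size; i++)
                aTotal[i] = computeNextParam(bTotal[i]);
        // root supplies the send buffer; every rank receives its own new a
        MPI_Scatter(aTotal, 1, MPI_INT, &a, 1, MPI_INT, 0, MPI_COMM_WORLD);
    }

    delete[] bTotal;
    delete[] aTotal;
    MPI_Finalize();
    return 0;
}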

I created a small test case, because I don't see how I can use MPI_Gather in my slaves:

#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main (int argc, char *argv[]) {
int size;
int rank;
int a[12];
int i;
int start,end;
int b;


MPI_Init(&argc, &argv);
MPI_Status status;
MPI_Request req;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);
int* bb= new int[size];
int source;

//master
if(!rank){
    while(true){
        b=12;
        MPI_Recv(&bb[0], 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &status);
        source = status.MPI_SOURCE;
        printf("master receive b %d from source %d \n", bb[0], source);
        if (source == 1) goto finish;
    }
}

//slave
if(rank){
    b=13;
    if (rank==1) {b=15; sleep(2);}  // rank 1 reports last because of the sleep
    int source = rank;
    printf("slave %d will send b %d \n", source, b);
    // MPI_Gather(&b,1,MPI_INT,bb,1,MPI_INT,0,MPI_COMM_WORLD); // hangs: the master never calls MPI_Gather, so the collective cannot complete
    MPI_Send(&b, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
}
finish:
delete[] bb;
MPI_Finalize();
return 0;
}

I am trying to send the slave data to the master with a collective operation.

Is this implementation realistic?

You should post some source code. – Vladimir F
I edited my first post. – superours

1 Answer


What you propose sounds reasonable. Another approach would be to have the slaves all broadcast their results to each other in one go (MPI_Allgather); then you can implement the scoring and what-to-try-next algorithm directly in each slave. If the scoring algorithm is not too complex, the overhead of running it in every slave will be worth it in terms of speed, because the slaves no longer have to communicate with the master at all, saving one communication per iteration.
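
A rough sketch of what that could look like, with simulationRun and pickNextParam as hypothetical placeholders for your simulation and scoring routines:

#include <mpi.h>

int simulationRun(int a) { return a + 1; }  // placeholder

// placeholder scoring: derive the next parameter from everyone's results
int pickNextParam(const int* allB, int n, int me) {
    int sum = 0;
    for (int i = 0; i < n; i++) sum += allB[i];
    return sum / n + me;
}

int main (int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int a = rank;               // initial parameter
    int* allB = new int[size];  // every rank's result, replicated on every rank

    for (int iter = 0; iter < 10; iter++) {
        int b = simulationRun(a);
        // after this call every rank holds the full results array; no master round-trip
        MPI_Allgather(&b, 1, MPI_INT, allB, 1, MPI_INT, MPI_COMM_WORLD);
        // each rank runs the same scoring algorithm on identical data
        a = pickNextParam(allB, size, rank);
    }

    delete[] allB;
    MPI_Finalize();
    return 0;
}

One caveat: MPI_Allgather, like MPI_Gather, only completes once every rank has contributed, so ranks whose simulations finish early will wait at the collective for the slowest one.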