
I am coding a parallel algorithm and I have an issue with non-blocking communication. I model my problem with the following code:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main( int argc, char* argv[] ) {
    MPI_Init(&argc, &argv);
    int rank, p;
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &p );

    int a, i, j;
    int maxNumber = 8192;

    /* (maxNumber + 1) x (maxNumber + 1) matrix; one row is sent per message */
    int (*tab)[maxNumber + 1] = malloc(sizeof(int[maxNumber + 1][maxNumber + 1]));
    /* a single request, reused for every MPI_Isend */
    MPI_Request* r = malloc(sizeof * r);

    if(rank == 0){
        for(i = 0; i < maxNumber + 1; i++){
            /* compute row i, then send it to every other process */
            for(j = 0; j < maxNumber + 1; j++){
                tab[i][j] = 2*i + i*j;
            }
            for(a = 1; a < p; a++){
                MPI_Isend(tab[i], maxNumber + 1, MPI_INT, a, i, MPI_COMM_WORLD, r);
                printf("Process 0 send the block %d to process %d\n", i, a);
            }
        }

    }
    else{
        for(i = 1; i < p; i++){
            if(rank == i){
                /* receive the rows one by one, in tag order */
                for(j = 0; j < maxNumber + 1; j++){
                    printf("Process %d wait the block %d to process 0\n", i, j);
                    MPI_Recv(tab[j], maxNumber + 1, MPI_INT, 0, j, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                    printf("Process %d receive the block %d to process 0\n", i, j);
                }
            }
        }
    }

    MPI_Finalize();

    return 0;
}

Process 0 sends each row of a matrix of size 8192 * 8192 to the other processes after some computations. The problem is that process 0 finishes sending all 8192 rows before the other processes have received the data.

This is one part of the output:

...
...
Process 0 send the block 8187 to process 1
Process 0 send the block 8188 to process 1
Process 0 send the block 8189 to process 1
Process 0 send the block 8190 to process 1
Process 0 send the block 8191 to process 1
Process 0 send the block 8192 to process 1
Process 1 receive the block 5 to process 0
Process 1 wait the block 6 to process 0
Process 1 receive the block 6 to process 0
Process 1 wait the block 7 to process 0
Process 1 receive the block 7 to process 0
Process 1 wait the block 8 to process 0
Process 1 receive the block 8 to process 0
Process 1 wait the block 9 to process 0
Process 1 receive the block 9 to process 0
...
...

PS: The send must be non-blocking because, in my problem, process 0 performs O(n²/p²) computation in each iteration before sending the result to the other processes, so that they can begin their own computations as soon as possible.

Do you know what I can do to solve this issue?

1/ Reusing the same MPI_Request for all calls isn't the best idea ever. 2/ Why not use a collective like MPI_Bcast() or MPI_Scatter()? And 3/ if the non-blocking aspect is truly important, then go for the non-blocking collectives MPI_Ibcast() or MPI_Iscatter()... - Gilles
1/ You are right! But even when I use an array of requests, one per communication, it does not solve the problem. 2 and 3/ I cannot use MPI_Bcast() or MPI_Scatter() because in a communication round I don't send data to all the processors. So if I have 5 processors, processor 0 can send data in the first iteration to processors 2 and 4, in the second iteration processor 2 can send data to processors 1 and 3, and so on. - Compiii
You do need to MPI_Waitall() at some point in time. Also for(i...) if (i == ...) can be simplified. - Gilles Gouaillardet
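
For illustration, here is a minimal sketch (not from the original post) of what the MPI_Waitall() suggestion could look like for rank 0's send loop, keeping one request per MPI_Isend; nrows, ndest and reqs are illustrative names:

/* Sketch only: one request per outstanding MPI_Isend, all completed
   with MPI_Waitall before the data is reused or MPI_Finalize is called.
   nrows, ndest and reqs are illustrative names, not from the original code. */
int nrows = maxNumber + 1;
int ndest = p - 1;
MPI_Request *reqs = malloc(nrows * ndest * sizeof *reqs);

for (i = 0; i < nrows; i++) {
    /* ... compute row i of tab ... */
    for (a = 1; a < p; a++) {
        MPI_Isend(tab[i], nrows, MPI_INT, a, i, MPI_COMM_WORLD,
                  &reqs[i * ndest + (a - 1)]);
    }
}

/* All send requests must be completed before tab is modified or freed,
   and before MPI_Finalize. */
MPI_Waitall(nrows * ndest, reqs, MPI_STATUSES_IGNORE);
free(reqs);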

1 Answer


Thank you @Gilles for your answers; they allowed me to solve my problem. I needed to use MPI_Ibsend together with MPI_Buffer_attach, which provides the buffer space into which the data is copied until it is delivered.
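
As a side note: the buffer attached with MPI_Buffer_attach must be at least as large as the packed size of the pending message(s) plus MPI_BSEND_OVERHEAD. A minimal sketch of computing that size for one row (packSize, rowBufSize and rowBuf are illustrative names, not taken from the code below):

/* Sketch only: minimum attach-buffer size for one buffered send of
   (maxNumber + 1) ints. MPI_Pack_size and MPI_BSEND_OVERHEAD are standard MPI;
   the variable names are made up for this example. */
int packSize;
MPI_Pack_size(maxNumber + 1, MPI_INT, MPI_COMM_WORLD, &packSize);
int rowBufSize = packSize + MPI_BSEND_OVERHEAD;
char *rowBuf = malloc(rowBufSize);
MPI_Buffer_attach(rowBuf, rowBufSize);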

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main( int argc, char* argv[] ) {
    MPI_Init(&argc, &argv);
    int rank, p;
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &p );

    int a, i, j;
    int maxNumber = atoi(argv[1]);    /* matrix dimension from the command line */

    /* (maxNumber + 1) x (maxNumber + 1) matrix; one row is sent per message */
    int (*tab)[maxNumber + 1] = malloc(sizeof(int[maxNumber + 1][maxNumber + 1]));
    /* one request per row and per destination */
    MPI_Request* tabReq = malloc((maxNumber + 1) * (p - 1) * sizeof * tabReq);

    /* buffer large enough for one buffered row plus the MPI bookkeeping overhead */
    int bufsize = (maxNumber + 1) * (int)sizeof(int) + MPI_BSEND_OVERHEAD;
    char *buf = malloc( bufsize );

    if(rank == 0){
        for(i = 0; i < maxNumber + 1; i++){
            for(j = 0; j < maxNumber + 1; j++){
                tab[i][j] = 2*i + i*j;
            }
            for(a = 1; a < p; a++){
                MPI_Buffer_attach( buf, bufsize );
                MPI_Ibsend(tab[i], maxNumber + 1, MPI_INT, a, i, MPI_COMM_WORLD, &tabReq[i * (p - 1) + (a - 1)]);
                /* detach blocks until the buffered row has actually left the buffer */
                MPI_Buffer_detach( &buf, &bufsize );
                printf("Process 0 send the block %d to process %d\n", i, a);
            }
        }
        /* complete every outstanding send request before finalizing */
        MPI_Waitall((maxNumber + 1) * (p - 1), tabReq, MPI_STATUSES_IGNORE);
    }
    else{
        for(j = 0; j < maxNumber + 1; j++){
            printf("Process %d wait the block %d to process 0\n", rank, j);
            MPI_Recv(tab[j], maxNumber + 1, MPI_INT, 0, j, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("Process %d receive the block %d to process 0\n", rank, j);
        }
    }

    MPI_Finalize();

    return 0;
}

This is one part of the output:

...
...
Process 1 wait the block 8186 to process 0
Process 0 send the block 8185 to process 1
Process 1 receive the block 8186 to process 0
Process 1 wait the block 8187 to process 0
Process 0 send the block 8186 to process 1
Process 1 receive the block 8187 to process 0
Process 1 wait the block 8188 to process 0
Process 0 send the block 8187 to process 1
Process 1 receive the block 8188 to process 0
Process 1 wait the block 8189 to process 0
Process 0 send the block 8188 to process 1
Process 1 receive the block 8189 to process 0
Process 1 wait the block 8190 to process 0
Process 0 send the block 8189 to process 1
Process 1 receive the block 8190 to process 0
Process 1 wait the block 8191 to process 0
Process 0 send the block 8190 to process 1
Process 1 receive the block 8191 to process 0
Process 1 wait the block 8192 to process 0
Process 0 send the block 8191 to process 1
Process 1 receive the block 8192 to process 0
Process 0 send the block 8192 to process 1