4 votes

I parallelized a Fortran code using MPI. At the points where data is shared, every process sends its outgoing data with MPI_Isend, and then each process collects the data it needs using MPI_Recv. Since MPI_Recv is blocking, I know that each process has the data it needs before moving on with its calculations. Therefore, I just ignored the request handle that MPI_Isend returns: I stored it in some integer that I do not retain, and I never call MPI_Wait. When I run my code, I notice that it gobbles up more memory at every iteration, and I'm wondering if it's because I'm not calling MPI_Wait, since the documentation for MPI_Wait says:

If the communication object associated with this request was created by a nonblocking send or receive call, then the object is deallocated by the call to MPI_WAIT and the request handle is set to MPI_REQUEST_NULL.

Do you think this is why my program is eating more memory throughout the run?
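
Here is a minimal sketch of the pattern (the buffer sizes, tag, and ring-style neighbor ranks are placeholders, not my actual model code):

    program leaky_exchange
      use mpi
      implicit none
      integer, parameter :: n = 1000, tag = 0
      integer :: ierr, rank, nprocs, throwaway_req, step
      double precision :: sendbuf(n), recvbuf(n)

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

      do step = 1, 100
        sendbuf = dble(rank)
        ! Nonblocking send; the request handle goes into a throwaway
        ! variable and is never waited on or tested.
        call MPI_Isend(sendbuf, n, MPI_DOUBLE_PRECISION, &
                       mod(rank + 1, nprocs), tag, &
                       MPI_COMM_WORLD, throwaway_req, ierr)
        ! Blocking receive: the needed data is here before we move on.
        call MPI_Recv(recvbuf, n, MPI_DOUBLE_PRECISION, &
                      mod(rank - 1 + nprocs, nprocs), tag, &
                      MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
      end do

      call MPI_Finalize(ierr)
    end program leaky_exchange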

Thank you. I guess I need to come up with a system for storing/retrieving the request handles now. – rks171

1 Answer

4 votes

An MPI_Request associated with a nonblocking MPI communication call such as your MPI_Isend has memory allocated for it inside the MPI library, and that memory must be cleaned up through MPI itself (you cannot deallocate it yourself).

That memory will not be given back until one of three things happens:

  • A wait such as MPI_Wait completes on the request, freeing it (see the sketch after this list).
  • A call to MPI_Test returns with its completion flag set, which also frees it.
  • The request is freed explicitly with MPI_Request_free.
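
For your pattern, the simplest fix is the first option: store the request handles in an array and complete them all at once with MPI_Waitall. A sketch, where nsends, sendbufs, and dest are hypothetical names standing in for your model's data:

    integer, parameter :: n = 1000, tag = 0
    integer :: ierr, i, nsends
    integer, allocatable :: requests(:), dest(:)
    double precision, allocatable :: sendbufs(:, :)

    allocate(requests(nsends))
    do i = 1, nsends
      ! Keep every request handle instead of discarding it.
      call MPI_Isend(sendbufs(:, i), n, MPI_DOUBLE_PRECISION, dest(i), &
                     tag, MPI_COMM_WORLD, requests(i), ierr)
    end do

    ! ... matching MPI_Recv calls, as in your current code ...

    ! Complete all outstanding sends; MPI deallocates each request and
    ! sets the handles to MPI_REQUEST_NULL.
    call MPI_Waitall(nsends, requests, MPI_STATUSES_IGNORE, ierr)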

It is possible to free an active request (i.e. an MPI_Request for a message that has not finished transmitting). The message will still be delivered, but the request handle is no longer valid for any use such as testing or waiting.
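
If you genuinely never need to know locally when a send has completed, that fire-and-forget variant looks like the sketch below (placeholder names again), but note the caveat in the comments:

    call MPI_Isend(sendbuf, n, MPI_DOUBLE_PRECISION, dest, tag, &
                   MPI_COMM_WORLD, request, ierr)
    ! Release the handle immediately; the message is still delivered,
    ! and request is set to MPI_REQUEST_NULL.
    call MPI_Request_free(request, ierr)
    ! Caveat: sendbuf must not be reused until the send has actually
    ! completed, which can no longer be detected locally.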