0 votes

I wrote a simple hello-world program in Visual C++ 2010 Express with the MPI library and can't understand why my code isn't working.

#include <mpi.h>

int size, rank;
MPI_Init( NULL, NULL );
MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);

int a, b = 5;
MPI_Status st;

MPI_Send( &b, 1, MPI_INT, 0, 0, MPI_COMM_WORLD );
MPI_Recv( &a, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &st );

MPI_Send tells me "DEADLOCK: attempting to send a message to the local process without a prior matching receive". If I put the Recv first, the program gets stuck there (no data yet, and the receive is blocking). What am I doing wrong?

My IDE is Visual C++ 2010 Express. MPI comes from the HPC SDK 2008 (32-bit).

It's been a while since I wrote MPI programs, so take my comments with a grain of salt. When the runtime environment starts the MPI programs, it assigns each process a unique, MPI-specific ID. Based on the value of that ID, you either send or receive. If you send without a process ready to receive, you are obviously going to end up in a deadlock. – R Sahu
MPI_Comm_rank returns 0 (my index in the process group) – I'm first in the group :) So the first zero in the send/recv parameters is my own rank. – Alexey
Also, I tried writing a multithreaded program. In the first thread I MPI_Recv the data, and in the second I Sleep(100) and then MPI_Send it (rank is 0 in all threads). MPI_Send successfully sends the data, but MPI_Recv never receives it. – Alexey
It's been such a long time for me. I hope somebody else can help you. – R Sahu
Anyway, thanks for the help :) – Alexey

2 Answers

2 votes

You need something like this:

assert(size >= 2);   /* this pattern needs at least two processes */
if (rank == 0)
    MPI_Send( &b, 1, MPI_INT, 1, 0, MPI_COMM_WORLD );
if (rank == 1)
    MPI_Recv( &a, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &st );

The idea of MPI is that every process runs the same program, and sometimes you do need to be aware of which participant you are in the "world." In this case, assuming you have two members (as per my assert), you need to make one of them send and the other receive.

Note also that I changed the "dest" parameter of the send, because 0 needs to send to 1, and therefore 1 needs to receive from 0.

You can later do it the other way around if you wish (if each needs to tell the other something), but in such a case you may find even more efficient ways to do it using "collective operations", where you can exchange (both send and receive) with all the peers at once.
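For example, here is a minimal sketch of such an all-to-all exchange using MPI_Allgather, where every rank contributes one integer and receives one from each peer. The variable names (mine, all) and the per-rank value are just illustrative, not anything from the question:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main( int argc, char **argv )
{
    int size, rank;
    MPI_Init( &argc, &argv );
    MPI_Comm_size( MPI_COMM_WORLD, &size );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    int mine = rank * 10;                     /* this rank's own value (illustrative) */
    int *all = malloc( size * sizeof(int) );  /* room for one value per rank */

    /* Every rank both sends its value and receives everyone's values. */
    MPI_Allgather( &mine, 1, MPI_INT, all, 1, MPI_INT, MPI_COMM_WORLD );

    if (rank == 0)
        for (int i = 0; i < size; i++)
            printf( "value from rank %d: %d\n", i, all[i] );

    free( all );
    MPI_Finalize();
    return 0;
}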

1 vote

In your example code, you're sending to and receiving from rank 0. If you are only running your MPI program with 1 process (which makes little sense, but we'll accept it for the sake of argument), you could make this work by using non-blocking calls instead of the blocking versions. That would change your program to look like this:

#include <mpi.h>

int size, rank;
MPI_Init( NULL, NULL );
MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);

int a, b = 5;
MPI_Status st[2];
MPI_Request request[2];

/* Post both operations without blocking, then wait for both to finish. */
MPI_Isend( &b, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &request[0] );
MPI_Irecv( &a, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &request[1] );
MPI_Waitall( 2, request, st );

That lets both the send and the receive complete at the same time. The reason your MPI implementation doesn't like your original code (and it's very nice of it to tell you so) is that the call to MPI_SEND may block until the matching MPI_RECV has been posted, which in this case would never happen, because MPI_RECV only gets called after MPI_SEND returns: a circular dependency.

In MPI, the 'I' prefix on a call means "Immediate": the call returns right away and the work completes later, when you call MPI_WAIT (or some variant of it, such as MPI_WAITALL in this example). So what we did here was make the send and receive return immediately, essentially telling MPI that we intend to do a send and a receive with rank 0 at some point in the future; then, on the next line, we tell MPI to go ahead and finish those calls now.

The benefit of using the immediate versions of these calls is that, in theory, MPI can make progress on the send and receive in the background while your application does something else that doesn't depend on that data. Then, once the call to MPI_WAIT* completes, the data is available and you can do whatever you need with it.
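For instance, here is a minimal sketch of that overlap pattern between ranks 0 and 1 (so it assumes at least two processes, unlike the single-process example above). The do_unrelated_work function is a hypothetical placeholder for computation that doesn't touch the message buffers:

#include <mpi.h>
#include <stdio.h>

/* Hypothetical placeholder for work independent of the message buffers. */
static void do_unrelated_work(void)
{
    /* ... computation that doesn't use a or b ... */
}

int main( int argc, char **argv )
{
    int size, rank;
    MPI_Init( &argc, &argv );
    MPI_Comm_size( MPI_COMM_WORLD, &size );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    int a = 0, b = 5;
    MPI_Request request;

    /* Post the transfer, but don't wait for it yet. */
    if (rank == 0)
        MPI_Isend( &b, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &request );
    else if (rank == 1)
        MPI_Irecv( &a, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &request );

    /* The transfer can make progress in the background while we compute. */
    do_unrelated_work();

    /* Only now do we require the communication to be finished. */
    if (rank < 2)
        MPI_Wait( &request, MPI_STATUS_IGNORE );

    if (rank == 1)
        printf( "received %d\n", a );

    MPI_Finalize();
    return 0;
}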