6 votes

After much Googling, I have no idea what's causing this issue. Here it is:

I have a simple call to MPI_Allgather in my code which I have double, triple, and quadruple-checked to be correct (send/receive buffers are properly sized; the send/receive sizes in the call are correct), but for 'large' numbers of processes I get either a deadlock or an MPI_ERR_TRUNCATE. The communicator being used for the Allgather is split from MPI_COMM_WORLD using MPI_Comm_split. For my current testing, rank 0 goes to one communicator, and the remaining ranks go to a second communicator. For 6 total ranks or fewer, the Allgather works just fine. If I use 7 ranks, I get an MPI_ERR_TRUNCATE. With 8 ranks, I get a deadlock. I have verified that the communicators were split correctly (MPI_Comm_rank and MPI_Comm_size are correct on all ranks for both communicators).

I have manually verified the size of each send and receive buffer and the maximum number of receives. My first workaround was to swap the MPI_Allgather for a loop of MPI_Gather calls, one to each process. That worked for that one case, but changing the meshes given to my code (CFD grids being partitioned using METIS) brought the problem back. Now my solution, which I haven't been able to break (yet), is to replace the Allgather with an Allgatherv, which I suppose is more efficient anyway since each process sends a different number of pieces of data.

Here's the (I hope) relevant offending code in context; if I've missed something, the Allgather in question is on line 599 of this file.

  // Get the number of mpiFaces on each processor (for later communication)
  // 'nProcGrid' is the size of the communicator 'gridComm'
  vector<int> nMpiFaces_proc(nProcGrid);

  // This MPI_Allgather works just fine, every time      
  // int nMpiFaces is assigned on preceding lines
  MPI_Allgather(&nMpiFaces,1,MPI_INT,nMpiFaces_proc.data(),1,MPI_INT,gridComm);

  int maxNodesPerFace = (nDims==2) ? 2 : 4;
  int maxNMpiFaces = getMax(nMpiFaces_proc);
  // The matrix class is just a fancy wrapper around std::vector that 
  // allows for (i,j) indexing.  The getSize() and getData() methods just
  // call the size() and data() methods, respectively, of the underlying
  // vector<int> object.
  matrix<int> mpiFaceNodes_proc(nProcGrid,maxNMpiFaces*maxNodesPerFace);
  // This is the MPI_Allgather which (sometimes) doesn't work.
  // vector<int> mpiFaceNodes is assigned in preceding lines
  MPI_Allgather(mpiFaceNodes.data(),mpiFaceNodes.size(),MPI_INT,
                mpiFaceNodes_proc.getData(),maxNMpiFaces*maxNodesPerFace,
                MPI_INT,gridComm);
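
For reference, the Allgatherv-based workaround mentioned above looks roughly like the sketch below; the recvCounts, displs, and allFaceNodes names are mine, added just for illustration rather than copied from the actual file.

  // Sketch of the Allgatherv workaround (illustrative variable names):
  // 1) Share how many ints each rank will contribute
  int sendCount = (int) mpiFaceNodes.size();
  vector<int> recvCounts(nProcGrid);
  MPI_Allgather(&sendCount,1,MPI_INT,recvCounts.data(),1,MPI_INT,gridComm);

  // 2) Build displacements and size the receive buffer exactly
  vector<int> displs(nProcGrid,0);
  for (int i=1; i<nProcGrid; i++)
    displs[i] = displs[i-1] + recvCounts[i-1];
  vector<int> allFaceNodes(displs.back()+recvCounts.back());

  // 3) Gather the variable-sized contributions
  MPI_Allgatherv(mpiFaceNodes.data(),sendCount,MPI_INT,
                 allFaceNodes.data(),recvCounts.data(),displs.data(),
                 MPI_INT,gridComm);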

I am currently using OpenMPI 1.6.4, g++ 4.9.2, and an AMD FX-8350 8-core processor with 16GB of RAM, running the latest updates of Elementary OS Freya 0.3 (basically Ubuntu 14.04). However, I have also had this issue on another machine using CentOS, Intel hardware, and MPICH2.

Any ideas? I have heard that it could be possible to change MPI's internal buffer size(s) to fix similar issues, but a quick try to do so (as shown in http://www.caps.ou.edu/pipermail/arpssupport/2002-May/000361.html) had no effect.

For reference, this issue is very similar to the one shown here: https://software.intel.com/en-us/forums/topic/285074, except that in my case, I have only 1 processor with 8 cores, on a single desktop computer.

UPDATE: I've managed to put together a minimal example of this failure:

#include <iostream>
#include <vector>
#include <algorithm>  // for std::max
#include <stdlib.h>
#include <time.h>

#include "mpi.h"

using namespace std;

int main(int argc, char* argv[])
{
  MPI_Init(&argc,&argv);

  int rank, nproc, newID, newRank, newSize;
  MPI_Comm newComm;
  MPI_Comm_rank(MPI_COMM_WORLD,&rank);
  MPI_Comm_size(MPI_COMM_WORLD,&nproc);

  newID = rank%2;
  MPI_Comm_split(MPI_COMM_WORLD,newID,rank,&newComm);
  MPI_Comm_rank(newComm,&newRank);
  MPI_Comm_size(newComm,&newSize);

  srand(time(NULL));

  // Get a different 'random' number for each rank on newComm
  //int nSend = rand()%10000;
  //for (int i=0; i<newRank; i++) nSend = rand()%10000;

  /*! -- Found a set of #'s which fail for nproc=8: -- */
  int badSizes[4] = {2695,7045,4256,8745};
  int nSend = badSizes[newRank];

  cout << "Comm " << newID << ", rank " << newRank << ": nSend = " << nSend << endl;

  vector<int> send(nSend);
  for (int i=0; i<nSend; i++) 
    send[i] = rand();

  vector<int> nRecv(newSize);
  MPI_Allgather(&nSend,1,MPI_INT,nRecv.data(),1,MPI_INT,newComm);

  int maxNRecv = 0;
  for (int i=0; i<newSize; i++)
    maxNRecv = max(maxNRecv,nRecv[i]);

  vector<int> recv(newSize*maxNRecv);
  MPI_Barrier(MPI_COMM_WORLD);
  cout << "rank " << rank << ": Allgather-ing data for communicator " << newID << endl;
  MPI_Allgather(send.data(),nSend,MPI_INT,recv.data(),maxNRecv,MPI_INT,newComm);
  cout << "rank " << rank << ": Done Allgathering-data for communicator " << newID << endl;

  MPI_Finalize();
  return 0;
}

The above code was compiled and run as:

mpicxx -std=c++11 mpiTest.cpp -o mpitest
mpirun -np 8 ./mpitest

with the following output on both my 16-core CentOS and my 8-core Ubuntu machines:

Comm 0, rank 0: nSend = 2695
Comm 1, rank 0: nSend = 2695
Comm 0, rank 1: nSend = 7045
Comm 1, rank 1: nSend = 7045
Comm 0, rank 2: nSend = 4256
Comm 1, rank 2: nSend = 4256
Comm 0, rank 3: nSend = 8745
Comm 1, rank 3: nSend = 8745
rank 5: Allgather-ing data for communicator 1
rank 6: Allgather-ing data for communicator 0
rank 7: Allgather-ing data for communicator 1
rank 0: Allgather-ing data for communicator 0
rank 1: Allgather-ing data for communicator 1
rank 2: Allgather-ing data for communicator 0
rank 3: Allgather-ing data for communicator 1
rank 4: Allgather-ing data for communicator 0
rank 5: Done Allgathering-data for communicator 1
rank 3: Done Allgathering-data for communicator 1
rank 4: Done Allgathering-data for communicator 0
rank 2: Done Allgathering-data for communicator 0

Note that only 2 of the ranks from each communicator exit the Allgather; this isn't what happens in my actual code (there, no ranks on the 'broken' communicator exit the Allgather), but the end result is the same: the code hangs until I kill it.

I'm guessing this has something to do with the differing number of sends on each process, but as far as I can tell from the MPI documentation and tutorials I've seen, this is supposed to be allowed, correct? Of course, MPI_Allgatherv is a little more applicable here, but for simplicity's sake I have been using Allgather instead.

Your second code snippet isn't consistent with your description. How about posting the actual code you are running? And try to create an MCVE. – Jeff Hammond
I've updated the code snippet and provided a link to the full (2000+ line) file; my original MPI_Gather workaround stopped working, so I removed that snippet. I may have had some luck duplicating the effects in a simple program that allocates a random-sized vector (up to a max of 10,000 ints) on each rank and performs the above MPI_Allgather, but I'm not certain and I probably won't be able to get back to it today. I'll post it when I get a chance. – Jacob
If you use rand on different procs, the args will not match. This is invalid use of MPI. Call rand on root and bcast to be consistent. – Jeff Hammond
So you're saying that MPI_Allgather must receive the same value of "send_count" from each process calling it? My interpretation of the documentation was that that was not necessary, so long as the receive buffer was adequately sized. I suppose that would explain why using MPI_Allgatherv doesn't give the same error. – Jacob
I can't find the exact text, but I am certain that the count args must be symmetric for allgather. – Jeff Hammond

1 Answer

5 votes

You must use MPI_Allgatherv if the input counts are not identical across all processes.

To be precise, what must match is the type signature (count, type). Technically you can reach the same fundamental representation with different datatypes (e.g. N elements of a basic type vs. 1 element of a contiguous type containing N of them), but if you use the same datatype argument everywhere, which is the common usage of MPI collectives, then your counts must match everywhere.
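
For illustration, the following fragment (with hypothetical names N, buf, out, and comm) shows two signature-equivalent ways to contribute the same N ints:

// Illustrative fragment (hypothetical names): both calls below present the
// same send-side type signature, namely N copies of MPI_INT.
MPI_Datatype vecOfN;
MPI_Type_contiguous(N, MPI_INT, &vecOfN);
MPI_Type_commit(&vecOfN);

MPI_Allgather(buf.data(), N, MPI_INT, out.data(), N, MPI_INT, comm); // N x MPI_INT
MPI_Allgather(buf.data(), 1, vecOfN,  out.data(), N, MPI_INT, comm); // 1 x (N-int block)

MPI_Type_free(&vecOfN);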

The relevant portion of the latest MPI standard (3.1) is on page 165:

The type signature associated with sendcount, sendtype, at a process must be equal to the type signature associated with recvcount, recvtype at any other process.
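
Applied to the minimal example above, the fix amounts to replacing the final MPI_Allgather with something along these lines (a sketch; the displs vector is a name I've introduced):

// nRecv already holds each rank's contribution size from the first Allgather,
// so build displacements from it and let Allgatherv handle the unequal counts.
vector<int> displs(newSize,0);
for (int i=1; i<newSize; i++)
  displs[i] = displs[i-1] + nRecv[i-1];

vector<int> recv(displs[newSize-1] + nRecv[newSize-1]);
MPI_Allgatherv(send.data(),nSend,MPI_INT,
               recv.data(),nRecv.data(),displs.data(),MPI_INT,newComm);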