I'm trying to implement the MPI function MPI_Scatter myself using MPI_Send and MPI_Recv.
I would like to use the official declaration of the function, which takes void pointers for the send/receive buffers:
MPI_Scatter(
    void* send_data,
    int send_count,
    MPI_Datatype send_datatype,
    void* recv_data,
    int recv_count,
    MPI_Datatype recv_datatype,
    int root,
    MPI_Comm communicator)
I have created an example which works great with the official MPI_Scatter; it shows the correct result.
I wrote two functions to implement this, one taking void pointers and one taking arrays of int. The second works fine, but the first only shows the first three elements of the created array. I think this is an issue related to how the matrix memory is addressed, but I can't see a way to fix it.
HERE IS THE CODE (MMPI_Scatter is the one giving me the error):
#include <stdio.h>
#include <mpi.h>
#include <stdlib.h>
#include <math.h>
#include <unistd.h>
#define ROOT 0
#define N 3
// Forward declarations (printArray's definition is not shown here)
int *createMatrix(int nRows, int nCols);
void printArray(int *array, int size);

int main(int argc, char **argv) {
    // for storing this process' rank, and the number of processes
    int rank, np;
    int *matrix = NULL;  // only allocated on the root process
    //MPI_Scatter
    int send_count, recv_count;
    int *recv_data;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &np);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == ROOT) {
        matrix = createMatrix(np, np);
        printArray(matrix, np * np);
    }
    send_count = np;
    recv_count = np;
    recv_data = malloc(recv_count * sizeof(int));

    //The original function provided by MPI works great!!
    MPI_Scatter(matrix, send_count, MPI_INT, recv_data, recv_count, MPI_INT, ROOT, MPI_COMM_WORLD);
    //This function just shows the first three elements of the matrix
    //MMPI_Scatter(matrix, send_count, MPI_INT, recv_data, recv_count, MPI_INT, ROOT, MPI_COMM_WORLD);
    //This function works great, but it does not use the official declaration of MPI_Scatter
    //MMPI_Scatter2(matrix, send_count, MPI_INT, recv_data, recv_count, MPI_INT, ROOT, MPI_COMM_WORLD);
    printArray(recv_data, recv_count);

    free(recv_data);
    MPI_Finalize();
    return 0;
}
//http://mpitutorial.com/tutorials/mpi-scatter-gather-and-allgather/
void MMPI_Scatter(void* send_data, int send_count, MPI_Datatype send_datatype,
                  void* recv_data, int recv_count, MPI_Datatype recv_datatype,
                  int root, MPI_Comm communicator) {
    int np, rank;
    int i;
    MPI_Status status;

    MPI_Comm_size(communicator, &np);
    MPI_Comm_rank(communicator, &rank);

    // note: send_data is only valid on the root, so this call is unsafe on other ranks
    printArray(send_data, np * np);
    if (rank == ROOT) {
        for (i = 0; i < np; i++) {
            // BUG: send_data is a void*, so (under GCC's extension) this
            // advances by i * send_count BYTES, not i * send_count elements
            MPI_Send(send_data + (i * send_count), send_count, send_datatype, i, 0, communicator);
        }
    }
    MPI_Recv(recv_data, recv_count, recv_datatype, root, 0, communicator, &status);
    printArray(send_data, np * np);
}
//Works great, but it does not use void pointers
void MMPI_Scatter2(int send_data[], int send_count, MPI_Datatype send_datatype,
                   int recv_data[], int recv_count, MPI_Datatype recv_datatype,
                   int root, MPI_Comm communicator) {
    int np, rank;
    int i;
    MPI_Status status;

    MPI_Comm_size(communicator, &np);
    MPI_Comm_rank(communicator, &rank);

    if (rank == ROOT) {
        for (i = 0; i < np; i++) {
            // int* arithmetic: advances by i * send_count elements, as intended
            MPI_Send(send_data + (i * send_count), send_count, send_datatype, i, 0, communicator);
        }
    }
    MPI_Recv(recv_data, recv_count, recv_datatype, root, 0, communicator, &status);
    printArray(recv_data, np);
}
int *createMatrix(int nRows, int nCols) {
    int *matrix;
    int h;

    if ((matrix = malloc(nRows * nCols * sizeof(int))) == NULL) {
        printf("Malloc error\n");
        exit(1);
    }
    //Test values
    for (h = 0; h < nRows * nCols; h++) {
        matrix[h] = h + 1;
    }
    return matrix;
}
UPDATE 1:
I think it is related to the info in this link: https://www.mpi-forum.org/docs/mpi-1.1/mpi-11-html/node71.html#Node71
There is a line with:
MPI_Send(sendbuf + i*sendcount*extent(sendtype), sendcount, sendtype, i.....)
but I didn't know how to obtain extent(sendtype). It turns out MPI can report it at run time: MPI_Type_get_extent gives the extent, and MPI_Type_size gives the size in bytes (they coincide for simple contiguous types like MPI_INT).
UPDATE 2:
Now it works, but only because I am hard-coding the datatype myself:
void MMPI_Scatter(void* send_data, int send_count, MPI_Datatype send_datatype,
                  void* recv_data, int recv_count, MPI_Datatype recv_datatype,
                  int root, MPI_Comm communicator) {
    int np, rank;
    int i;
    int size;
    MPI_Status status;
    MPI_Datatype type = MPI_INT;  // hard-coded for now

    MPI_Type_size(type, &size);   // size of one element, in bytes
    MPI_Comm_size(communicator, &np);
    MPI_Comm_rank(communicator, &rank);

    if (rank == ROOT) {
        for (i = 0; i < np; i++) {
            // cast to char* so the byte offset is standard C
            // (arithmetic on void* is a GCC extension)
            MPI_Send((char *)send_data + (i * send_count * size), send_count, send_datatype, i, 0, communicator);
        }
    }
    MPI_Recv(recv_data, recv_count, recv_datatype, root, 0, communicator, &status);
}
UPDATE 3 (SOLVED):
void MMPI_Scatter(void* send_data, int send_count, MPI_Datatype send_datatype,
                  void* recv_data, int recv_count, MPI_Datatype recv_datatype,
                  int root, MPI_Comm communicator) {
    int np, rank;
    int i;
    int size;
    MPI_Status status;

    // query the size in bytes of whatever datatype the caller passed in
    MPI_Type_size(send_datatype, &size);
    MPI_Comm_size(communicator, &np);
    MPI_Comm_rank(communicator, &rank);

    if (rank == ROOT) {
        for (i = 0; i < np; i++) {
            // cast to char* so the byte offset is standard C
            MPI_Send((char *)send_data + (i * send_count * size), send_count, send_datatype, i, 0, communicator);
        }
    }
    MPI_Recv(recv_data, recv_count, recv_datatype, root, 0, communicator, &status);
}
UPDATE 4:
This function works here because it uses the ROOT macro, but once it is called as a general collective, ROOT must be replaced by the root parameter, like this:
if (rank == root) {
}
Comments:
Does send_datatype tell you about the size of the buffer? You are doing math on a pointer type on each iteration through the for loop, and the results will be different for void* and int*. – jwdonahue
Does MPI_Datatype contain a size field? – jwdonahue