4 votes

I'm trying to multiply a square matrix by a vector using MPI and C. I must use MPI_Allgather to send all the parts of the matrix to all the processes. This is what I have so far:

#include "mpi.h" 
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <string.h>
#include <time.h>
#define DIM 500

int main(int argc, char *argv[])
{

    int i, j, n;
    int nlocal;        /* Number of locally stored rows of A */
    double *fb;        /* Will point to a buffer that stores the entire vector b */
    double a[DIM*DIM], b[DIM], x[DIM];
    int npes, myrank;
    MPI_Status status;

    MPI_Init(&argc, &argv);

    /* Get information about the communicator */
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &npes);

    /* Allocate the memory that will store the entire vector b */
    fb = (double*)malloc(n*sizeof(double));

    nlocal = n/npes;

    /* Gather the entire vector b on each processor using MPI's ALLGATHER operation */
    MPI_Allgather(b, nlocal, MPI_DOUBLE, fb, nlocal, MPI_DOUBLE, MPI_COMM_WORLD);

    /* Perform the matrix-vector multiplication involving the locally stored submatrix */
    for (i=0; i<nlocal; i++) {
        x[i] = 0.0;
        for (j=0; j<n; j++)
            x[i] += a[i*n+j]*fb[j];
    }

    free(fb);

    MPI_Finalize();
}//end main 

OK, it compiles now, but then I get an MPI_Allgather internal error. I got the same error with other solutions I tried.

Fatal error in MPI_Allgather: Internal MPI error!, error stack:
MPI_Allgather(961).......: MPI_Allgather(sbuf=0xa1828, scount=407275437, MPI_DOUBLE, rbuf=0xf61d0008, rcount=407275437, MPI_DOUBLE, MPI_COMM_WORLD) failed
MPIR_Allgather_impl(807).:
MPIR_Allgather(766)......:
MPIR_Allgather_intra(560):
MPIR_Localcopy(357)......: memcpy arguments alias each other, dst=0xb8513d70 src=0xa1828 len=-1036763800
Fatal error in MPI_Allgather: Internal MPI error!, error stack:
MPI_Allgather(961).......: MPI_Allgather(sbuf=0xa1828, scount=407275437, MPI_DOUBLE, rbuf=0xf61d0008, rcount=407275437, MPI_DOUBLE, MPI_COMM_WORLD) failed
MPIR_Allgather_impl(807).:
MPIR_Allgather(766)......:
MPIR_Allgather_intra(560):
MPIR_Localcopy(357)......: memcpy arguments alias each other, dst=0x3cb9b840 src=0xa1828 len=-1036763800
Fatal error in MPI_Allgather: Internal MPI error!, error stack:
MPI_Allgather(961).......: MPI_Allgather(sbuf=0xa1828, scount=407275437, MPI_DOUBLE, rbuf=0xf61d0008, rcount=407275437, MPI_DOUBLE, MPI_COMM_WORLD) failed
MPIR_Allgather_impl(807).:
MPIR_Allgather(766)......:
MPIR_Allgather_intra(560):
MPIR_Localcopy(357)......: memcpy arguments alias each other, dst=0x7a857ad8 src=0xa1828 len=-1036763800

Can anyone help me?

a is a two-dimensional array, but you're accessing it like it only has one dimension. The syntax would be a[??][??] rather than a[??], but what you put in place of the ??s depends on your data layout. - user786653
I'm accessing it like one dimension because it is broken up by rows across the different processes. - Mário Santana
I've tried that but it still gives me that error. Don't know how to turn this around. - Mário Santana

1 Answer

2 votes

Either way you are accessing the two-dimensional array through a single index (row- or column-major), so you can allocate the two-dimensional array as a[DIM*DIM] instead of a[DIM][DIM] and access it in a linear fashion. This solution is working for me.
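For illustration, here is a minimal serial sketch of that row-major linear indexing (no MPI; the small size N and the fill values are made up for the example): element (i, j) of the flat array lives at index i*N + j, which is exactly how a[i*n+j] is used in the question's inner loop.

#include <stdio.h>

#define N 4   /* small size just for illustration; the question uses DIM = 500 */

int main(void)
{
    double a[N*N];       /* flat, row-major storage of an N x N matrix */
    double b[N], x[N];
    int i, j;

    /* Fill the matrix and vector with some arbitrary values */
    for (i = 0; i < N; i++) {
        b[i] = 1.0;
        for (j = 0; j < N; j++)
            a[i*N + j] = (double)(i + j);   /* element (i, j) is stored at index i*N + j */
    }

    /* Matrix-vector product using the same linear indexing as the MPI code */
    for (i = 0; i < N; i++) {
        x[i] = 0.0;
        for (j = 0; j < N; j++)
            x[i] += a[i*N + j] * b[j];
    }

    for (i = 0; i < N; i++)
        printf("x[%d] = %f\n", i, x[i]);

    return 0;
}

The flat layout has the advantage that each process's block of rows is already contiguous in memory, which is what collectives like MPI_Allgather expect from a send buffer.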