1
votes

1st question:

I wonder how I can parallelize function calls to the same function, but with different input parameters in a for loop. For example (C code):

//a[i] and b[i] are defined as elements of a list with 2 columns and N rows
//i is the row number

#pragma omp parallel
{
  int i;
  char cmd[1000];   // private to each thread, so every thread builds its own command
  #pragma omp for nowait
  for (i = 0; i < N; i++) {
    // call the serial program
    sprintf(cmd, "./serial_program %f %f", a[i], b[i]);
    system(cmd);
  }
}

If I just apply a #pragma omp for (plus the OpenMP header, of course), nothing happens. Maybe this is not possible with OpenMP, but would it be possible with MPI, and how would it look then? I have experience only with OpenMP so far, not with MPI. Update: cmd is now defined within the parallel region.

Status: solved

2nd question:

If I have an OpenMP-parallelized program and I want to use it across different nodes within a cluster, how can I distribute the calls among the different nodes with MPI, and how would I compile it?

//a[i] and b[i] are defined as elements of a list with 2 columns and N rows
//i is the row number

  char cmd[1000];
  for (i = 0; i < N; i++) {
    // call the OpenMP-parallelized program
    sprintf(cmd, "./openmp_parallelized_program %f %f", a[i], b[i]);
    system(cmd);
  }

Status: unsolved

2
Did you set a number of threads for OpenMP (omp_set_num_threads())? – stefan
I just set export OMP_NUM_THREADS=8 before execution. – user2015521
Did you try using #pragma omp parallel for? You must have #pragma omp parallel somewhere to let OpenMP spawn threads. – stefan
Oh well, I forgot it in the example above, but in my original code I did not forget it. Code updated. – user2015521
The problem in question 1 was that I defined cmd outside the parallel region, so the program was executed with the same input parameters by all threads. I added a second question. – user2015521
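
For reference, a minimal sketch of the two ways to set the thread count discussed in the comments (the value 8 is just an example; omp_set_num_threads() takes precedence over the OMP_NUM_THREADS environment variable). Compile with gcc -fopenmp:

#include <omp.h>
#include <stdio.h>

int main(void)
{
    omp_set_num_threads(8);   /* same effect as export OMP_NUM_THREADS=8 */

    #pragma omp parallel      /* this is what actually spawns the thread team */
    {
        printf("thread %d of %d\n", omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}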

2 Answers

0
votes

MPI is a method for communicating between the nodes of a computing cluster; it enables one motherboard to talk to another. MPI is for clusters and large computing tasks, not for parallelizing desktop applications.

Communications in MPI are done by explicitly sending and receiving data.

Unlike OpenMP, MPI has no #pragma that will automatically handle the parallelization for you.
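
To illustrate what explicit message passing looks like, here is a minimal hedged sketch (the value and the tag 0 are arbitrary examples); run it with at least two processes, e.g. mpirun -np 2 ./a.out:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    double value = 3.14;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* rank 0 explicitly sends one double to rank 1 */
        MPI_Send(&value, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* rank 1 explicitly receives it */
        MPI_Recv(&value, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %f\n", value);
    }

    MPI_Finalize();
    return 0;
}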

Also, there is something really messed up about the code you posted: it is a C program that acts like a bash script. The same thing could be written directly as a shell script:

#!/bin/bash
N=10
for i in $(seq 1 $N); do
  ./program $i &   # launch each run in the background
done
wait               # wait for all runs to finish

On many clusters, calls to system() will execute only on the host node, resulting in no speedup and I/O problems. The command you showed is wholly unworkable.

0
votes

With MPI you would do something like:

int rank, size, i;
char cmd[1000];

MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);

// each rank handles its own contiguous block of rows
int start = (rank*N)/size;
int end = ((rank+1)*N)/size;

for (i = start; i < end; i++)
{
   sprintf(cmd, "./openmp_parallelized_program %f %f", a[i], b[i]);
   system(cmd);
}

MPI_Finalize();

Then run the MPI job with one process per node. There is a caveat, though: some MPI implementations do not allow processes to call fork() under certain conditions (and system() calls fork()), e.g. if they communicate over RDMA-based networks like InfiniBand. Instead, you could merge both programs into one hybrid MPI/OpenMP program.
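
A hedged sketch of that merged approach (do_work(), the sample data, and N are hypothetical placeholders standing in for whatever the real OpenMP program computes):

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define N 8

/* hypothetical placeholder for the work that ./openmp_parallelized_program
   would do for one (a, b) pair; its body runs with OpenMP threads */
static void do_work(double x, double y)
{
    #pragma omp parallel
    {
        printf("pair (%f, %f) handled by thread %d\n",
               x, y, omp_get_thread_num());
    }
}

int main(int argc, char **argv)
{
    double a[N] = {0, 1, 2, 3, 4, 5, 6, 7};   /* example data */
    double b[N] = {7, 6, 5, 4, 3, 2, 1, 0};
    int rank, size, i;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* each MPI rank (one per node) takes its own contiguous block of rows */
    for (i = (rank * N) / size; i < ((rank + 1) * N) / size; i++)
        do_work(a[i], b[i]);

    MPI_Finalize();
    return 0;
}

With a GCC-based toolchain this would typically be compiled with the MPI wrapper plus the OpenMP flag, e.g. mpicc -fopenmp hybrid.c -o hybrid, then launched with one MPI process per node (mpirun -np <nodes> ./hybrid) while OMP_NUM_THREADS controls the number of threads within each node.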