I want to write an efficient parallel application for the Intel Xeon Phi coprocessor (61 cores) that performs a five-point stencil computation. I wrote two versions of the code.
First version: I used OpenMP's "#pragma omp parallel for":
void ParallelStencil(const double* macierzIn, double* macierzOut, const int m, const int n)
{
    // The matrices are padded with a one-cell halo, so the real dimensions
    // are (m + 2) x (n + 2) and the interior is rows 1..m, columns 1..n.
    int m_real = m + 2;
    int n_real = n + 2;
    TimeCPU t;
    t.start();
    #pragma omp parallel for schedule(static, 1) shared(macierzIn, macierzOut)
    for(int i = 1; i < m_real - 1; ++i)
    {
        for(int j = 1; j < n + 1; ++j)   // interior columns 1..n
        {
            macierzOut[i * n_real + j] = Max(macierzIn[i * n_real + j],
                                             macierzIn[(i - 1) * n_real + j],
                                             macierzIn[(i + 1) * n_real + j],
                                             macierzIn[i * n_real + (j - 1)],
                                             macierzIn[i * n_real + (j + 1)]);
        }
    }
    t.stop();
    cout << "\nTime: " << t.time();
}
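For context, a minimal, hypothetical call site (the sizes, the test data, and the allocation scheme below are my own illustration, not part of the original program):

// Hypothetical driver (my illustration): allocates the padded
// (m + 2) x (n + 2) buffers that ParallelStencil expects, fills the
// interior, and runs the stencil once.
int main()
{
    const int m = 6100, n = 6100;                    // interior size, chosen arbitrarily
    double* in  = new double[(m + 2) * (n + 2)]();   // zero-initialized, halo included
    double* out = new double[(m + 2) * (n + 2)]();
    for(int i = 1; i <= m; ++i)
        for(int j = 1; j <= n; ++j)
            in[i * (n + 2) + j] = (i * 31 + j) % 97; // arbitrary test data
    ParallelStencil(in, out, m, n);
    delete[] in;
    delete[] out;
    return 0;
}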
Second version: I divided the matrix between the 61 cores, so that each part of the matrix is computed by the 4 hardware threads running on its core. In this version I tried to reduce cache misses by keeping the 4 threads of each core working within the same L2 cache (this only helps if the threads are actually pinned to cores; see the affinity sketch after the code).
void ParallelStencil(const double* macierzIn, double* macierzOut, int m, int n)
{
    int m_real = m + 2;
    int n_real = n + 2;
    int coreCount = threadsCount / 4;   // threadsCount: global total of OpenMP threads (e.g. 244), set elsewhere
    int tID, coreNum, start, stop, step;
    TimeCPU t;
    t.start();
    #pragma omp parallel shared(macierzIn, macierzOut, m, n, m_real, n_real, coreCount) private(tID, coreNum, start, stop, step)
    {
        tID = omp_get_thread_num();
        coreNum = tID / 4;                                   // 4 HW threads per core
        start = tID % 4 + ((m / coreCount) * coreNum) + 1;   // this thread's first row
        stop = (m / coreCount) * (coreNum + 1) + 1;          // one past the core's row block
        if(coreNum == coreCount - 1 && stop != m_real - 1)
        {
            stop = m_real - 1;   // the last core also takes the leftover rows
        }
        step = 4;                // the 4 threads of a core interleave rows round-robin
        for(int i = start; i < stop; i += step)
        {
            for(int j = 1; j < n + 1; ++j)
            {
                macierzOut[i * n_real + j] = Max(macierzIn[i * n_real + j],
                                                 macierzIn[(i - 1) * n_real + j],
                                                 macierzIn[(i + 1) * n_real + j],
                                                 macierzIn[i * n_real + (j - 1)],
                                                 macierzIn[i * n_real + (j + 1)]);
            }
        }
    }
    t.stop();
    cout << "\nTime: " << t.time();
}
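A caveat on this version: the mapping coreNum = tID / 4 only matches the hardware if the OpenMP threads are pinned so that consecutive thread IDs land on the same physical core, e.g. with Intel's KMP_AFFINITY=compact or with the standard OMP_PLACES=cores plus OMP_PROC_BIND=close. A minimal sketch to verify the placement (my own addition, not from the original program; sched_getcpu() is glibc/Linux-specific):

// Prints which logical CPU each OpenMP thread runs on, so one can check
// that threads 4k..4k+3 really share one core before trusting tID / 4.
#include <omp.h>
#include <sched.h>   // sched_getcpu(), glibc-specific
#include <cstdio>

int main()
{
    #pragma omp parallel
    std::printf("thread %d -> cpu %d\n", omp_get_thread_num(), sched_getcpu());
    return 0;
}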
In this version, the loop iterations within each part of the matrix are executed like this:
i=0 -> thread 0
i=1 -> thread 1
i=2 -> thread 2
i=3 -> thread 3
i=4 -> thread 0
...
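To make the index math concrete, here is a dry run (the values m = 6100 and threadsCount = 244 are my own illustration): each core owns m / coreCount = 100 rows, and the four threads of core 1 start at rows 101, 102, 103, 104 and then stride by 4 through the core's block.

// Dry run of the start/stop computation from ParallelStencil.
#include <cstdio>

int main()
{
    const int m = 6100, threadsCount = 244, coreCount = threadsCount / 4;
    for(int tID = 4; tID < 8; ++tID)   // the four threads of core 1
    {
        int coreNum = tID / 4;
        int start = tID % 4 + (m / coreCount) * coreNum + 1;
        int stop  = (m / coreCount) * (coreNum + 1) + 1;
        std::printf("thread %d: rows %d, %d, %d, ... while i < %d\n",
                    tID, start, start + 4, start + 8, stop);
    }
    return 0;
}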
After running both versions, the second one turned out to be slower. Why?