Luchian gives an explanation of why this behavior happens, but I thought it'd be a nice idea to show one possible solution to this problem and at the same time show a bit about cache-oblivious algorithms.
Your algorithm basically does:
for (int i = 0; i < N; i++)
    for (int j = 0; j < N; j++)
        A[j][i] = A[i][j];
which is just horrible for a modern CPU. One solution is to know the details of your cache system and tweak the algorithm to avoid those problems. That works great as long as you know those details, but it's not especially portable.
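For illustration, a cache-aware (tiled) version of the same loop might look roughly like this. This is only a sketch of the idea, not the original answer's code: BLOCK is a made-up tuning parameter you would have to pick for your particular cache line and cache sizes, and A/N are the matrix and its dimension from the loop above.

const int BLOCK = 64; // assumption: hand-tuned so two tiles fit comfortably in cache

for (int bi = 0; bi < N; bi += BLOCK)
    for (int bj = 0; bj < N; bj += BLOCK)
        // process one BLOCK x BLOCK tile at a time to keep both the row-wise
        // reads and the column-wise writes within the cache
        for (int i = bi; i < bi + BLOCK && i < N; i++)
            for (int j = bj; j < bj + BLOCK && j < N; j++)
                A[j][i] = A[i][j];

Get BLOCK wrong for the machine you're running on and most of the benefit disappears, which is exactly the portability problem.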
Can we do better than that? Yes we can: a general approach to this problem is cache-oblivious algorithms, which, as the name says, avoid depending on specific cache sizes [1].
The solution would look like this:
void recursiveTranspose(int i0, int i1, int j0, int j1) {
    int di = i1 - i0, dj = j1 - j0;
    const int LEAFSIZE = 32; // well ok caching still affects this one here
    if (di >= dj && di > LEAFSIZE) {
        int im = (i0 + i1) / 2;
        recursiveTranspose(i0, im, j0, j1);
        recursiveTranspose(im, i1, j0, j1);
    } else if (dj > LEAFSIZE) {
        int jm = (j0 + j1) / 2;
        recursiveTranspose(i0, i1, j0, jm);
        recursiveTranspose(i0, i1, jm, j1);
    } else {
        for (int i = i0; i < i1; i++)
            for (int j = j0; j < j1; j++)
                mat[j][i] = mat[i][j];
    }
}
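The function above (and the benchmark below) assumes a global matrix mat, MATSIZE, and the plain iterative transpose() for comparison. A minimal sketch of those surrounding declarations, assuming an int matrix as in the question, could look like this:

#include <cstdio>
#include <windows.h> // QueryPerformanceCounter/QueryPerformanceFrequency used in the benchmark

const int MATSIZE = 8192;
static int mat[MATSIZE][MATSIZE]; // ~256 MB as a static array, so build for x64

// the straightforward iterative version, used as the baseline
void transpose() {
    for (int i = 0; i < MATSIZE; i++)
        for (int j = 0; j < MATSIZE; j++)
            mat[j][i] = mat[i][j];
}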
Slightly more complex, but a short test shows something quite interesting on my ancient E8400 with a VS2010 x64 release build; test code for MATSIZE 8192:
int main() {
    LARGE_INTEGER start, end, freq;
    QueryPerformanceFrequency(&freq);

    QueryPerformanceCounter(&start);
    recursiveTranspose(0, MATSIZE, 0, MATSIZE);
    QueryPerformanceCounter(&end);
    printf("recursive: %.2fms\n", (end.QuadPart - start.QuadPart) / (double(freq.QuadPart) / 1000));

    QueryPerformanceCounter(&start);
    transpose();
    QueryPerformanceCounter(&end);
    printf("iterative: %.2fms\n", (end.QuadPart - start.QuadPart) / (double(freq.QuadPart) / 1000));
    return 0;
}
results:
recursive: 480.58ms
iterative: 3678.46ms
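The QueryPerformanceCounter calls are Windows-specific; on other platforms the same measurement could be done with std::chrono. This is just a sketch of an equivalent harness, assuming the same recursiveTranspose, transpose, and MATSIZE from above:

#include <chrono>
#include <cstdio>

int main() {
    using clock = std::chrono::steady_clock;

    auto t0 = clock::now();
    recursiveTranspose(0, MATSIZE, 0, MATSIZE);
    auto t1 = clock::now();
    printf("recursive: %.2fms\n",
           std::chrono::duration<double, std::milli>(t1 - t0).count());

    t0 = clock::now();
    transpose();
    t1 = clock::now();
    printf("iterative: %.2fms\n",
           std::chrono::duration<double, std::milli>(t1 - t0).count());
    return 0;
}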
Edit: About the influence of size: it is much less pronounced, although still noticeable to some degree. That's because we're using the iterative solution as the leaf node instead of recursing down to 1 (the usual optimization for recursive algorithms). If we set LEAFSIZE = 1, the cache has no influence for me (8193: 1214.06ms; 8192: 1171.62ms; 8191: 1351.07ms - that's inside the margin of error; the fluctuations are in the 100ms area, so this "benchmark" isn't something I'd be too comfortable with if we wanted completely accurate values).
[1] Sources for this stuff: well, if you can't get a lecture from someone who worked with Leiserson and co. on this, I assume their papers are a good starting point. Those algorithms are still quite rarely described - CLR has a single footnote about them. Still, it's a great way to surprise people.
Edit (note: I'm not the one who posted this answer; I just wanted to add this):
Here's a complete C++ version of the above code:
template<class InIt, class OutIt>
void transpose(InIt const input, OutIt const output,
    size_t const rows, size_t const columns,
    size_t const r1 = 0, size_t const c1 = 0,
    size_t r2 = ~(size_t) 0, size_t c2 = ~(size_t) 0,
    size_t const leaf = 0x20)
{
    if (!~c2) { c2 = columns - c1; }
    if (!~r2) { r2 = rows - r1; }
    size_t const di = r2 - r1, dj = c2 - c1;
    if (di >= dj && di > leaf)
    {
        transpose(input, output, rows, columns, r1, c1, (r1 + r2) / 2, c2);
        transpose(input, output, rows, columns, (r1 + r2) / 2, c1, r2, c2);
    }
    else if (dj > leaf)
    {
        transpose(input, output, rows, columns, r1, c1, r2, (c1 + c2) / 2);
        transpose(input, output, rows, columns, r1, (c1 + c2) / 2, r2, c2);
    }
    else
    {
        for (ptrdiff_t i1 = (ptrdiff_t) r1, i2 = (ptrdiff_t) (i1 * columns);
            i1 < (ptrdiff_t) r2; ++i1, i2 += (ptrdiff_t) columns)
        {
            for (ptrdiff_t j1 = (ptrdiff_t) c1, j2 = (ptrdiff_t) (j1 * rows);
                j1 < (ptrdiff_t) c2; ++j1, j2 += (ptrdiff_t) rows)
            {
                output[j2 + i1] = input[i2 + j1];
            }
        }
    }
}
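As a usage sketch (not part of the original answer): the template works on flat, row-major storage through random-access iterators, so it can be called on std::vector buffers like this. The sizes and names below are arbitrary, chosen only for illustration.

#include <cstddef>
#include <vector>

int main() {
    size_t const rows = 1000, columns = 1500;
    std::vector<int> in(rows * columns), out(columns * rows);

    // fill the input with something recognizable
    for (size_t i = 0; i < in.size(); ++i) { in[i] = (int) i; }

    // input is row-major rows x columns; output becomes row-major columns x rows
    transpose(in.begin(), out.begin(), rows, columns);
    return 0;
}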