130 votes

I am doing some matrix multiplication benchmarking, as previously mentioned in Why is MATLAB so fast in matrix multiplication?

Now I've run into another issue: when multiplying two 2048x2048 matrices, there is a big difference between C# and the others. When I try to multiply only 2047x2047 matrices, the timings seem normal. I've added some other sizes for comparison too.

1024x1024 - 10 seconds

1027x1027 - 10 seconds

2047x2047 - 90 seconds

2048x2048 - 300 seconds

2049x2049 - 91 seconds (update)

2500x2500 - 166 seconds

That is a three-and-a-half-minute difference for the 2k by 2k case.

I'm using 2D arrays:

// Array init like this
int rozmer = 2048;
float[,] matice1 = new float[rozmer, rozmer];
float[,] matice2 = new float[rozmer, rozmer];
float[,] matice3 = new float[rozmer, rozmer];

// Main multiply code
for (int j = 0; j < rozmer; j++)
{
    for (int k = 0; k < rozmer; k++)
    {
        float temp = 0;
        for (int m = 0; m < rozmer; m++)
        {
            temp = temp + matice1[j, m] * matice2[m, k];
        }
        matice3[j, k] = temp;
    }
}
This would be a great exam question for an advanced-level C programming or OS design class ;-) – Dana the Sane

Have you tried testing both multidimensional [,] and jagged [][] arrays, as well as 32- and 64-bit? I only tested a few times, but jagged seemed more in line with your results, though jagged 64-bit was high; I don't know if there are any heuristics in the JIT that apply to this situation or if it's cache-related, as previously suggested. If you want a GPGPU solution there is research.microsoft.com/en-us/projects/accelerator, which should be competitive with the times in your other post. – Kris

Somewhat naive question, but how many ops (adding/multiplying) are involved in multiplying two square matrices? – Nick T

10 Answers

63 votes

This probably has to do with conflicts in your L2 cache.

Cache misses on matice1 are not the problem, because it is accessed sequentially. For matice2, however, if a full column fits in L2 (i.e., when you access matice2[0, 0], matice2[1, 0], matice2[2, 0], and so on, nothing gets evicted), then there is no problem with cache misses on matice2 either.

Now, to go deeper into how caches work: if the byte address of your variable is X, then its cache line index would be (X >> 6) & (L - 1), where L is the total number of cache lines in your cache. L is always a power of 2. The six comes from the fact that 2^6 == 64 bytes is the standard cache line size.

Now what does this mean? It means that if I have address X and address Y, and (X >> 6) - (Y >> 6) is divisible by L (i.e., by some large power of 2), they will map to the same cache line.

Now, to go back to your problem: what is the difference between 2048 and 2049?

when 2048 is your size:

if you take &matice2[x, k] and &matice2[y, k], the byte difference &matice2[x, k] - &matice2[y, k] will be divisible by 2048 * 4 (the row length times the size of a float), so after the >> 6 the difference (&matice2[x, k] >> 6) - (&matice2[y, k] >> 6) is still divisible by a large power of 2 (128).

Thus, depending on the size of your L2, you will have a lot of cache line conflicts and will only utilize a small portion of your L2 to store a column. You won't actually be able to store a full column in your cache, so you will get bad performance.

When the size is 2049, the difference is 2049 * 4, which is not a power of 2, so you will have far fewer conflicts and your column will safely fit into your cache.
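To make the arithmetic concrete, here is a minimal sketch that counts how many distinct cache sets a column walk touches, assuming 64-byte lines and a 512 KB, 8-way L2 (1024 sets; real cache geometry varies):

using System;
using System.Collections.Generic;

class SetConflictDemo
{
    // Count distinct cache sets touched when stepping down one column.
    static int DistinctSets(long rowStrideBytes, int rows, int numSets)
    {
        var sets = new HashSet<long>();
        for (long row = 0; row < rows; row++)
        {
            long address = row * rowStrideBytes;       // byte address of matice2[row, k]
            sets.Add((address >> 6) & (numSets - 1));  // 64-byte lines; numSets is a power of 2
        }
        return sets.Count;
    }

    static void Main()
    {
        const int numSets = 1024; // assumed: 512 KB L2 / 64 B lines / 8 ways
        Console.WriteLine(DistinctSets(2048 * 4, 2048, numSets)); // prints 8: the whole column fights over 8 sets
        Console.WriteLine(DistinctSets(2049 * 4, 2049, numSets)); // prints 1024: the column spreads over every set
    }
}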

Now, to test this theory, there are a couple of things you can do:

Allocate your matice2 array like this: matice2 = new float[rozmer, 4096], and run with rozmer = 1024, 1025, or any size. You should see very bad performance compared to what you had before, because you forcefully align all columns to conflict with each other.

Then try matice2 = new float[rozmer, 4097] and run it with any size; you should see much better performance. A rough sketch of this experiment follows below.
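A sketch of that experiment (variable names follow the question; only the column counts matter):

int rozmer = 1024;
float[,] matice2Bad  = new float[rozmer, 4096]; // columns start 4096 * 4 bytes apart: a power of two, so they collide
float[,] matice2Good = new float[rozmer, 4097]; // one extra column breaks the power-of-two stride

// Run the same triple loop from the question against each allocation
// (reading only the first rozmer columns) and compare the timings.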

21 votes

Probably a caching effect. With matrix dimensions that are large powers of two, and a cache size that is also a power of two, you can end up using only a small fraction of your L1 cache, slowing things down a lot. Naive matrix multiplication is usually constrained by the need to fetch data into the cache. Optimized algorithms using tiling (or cache-oblivious algorithms) focus on making better use of the L1 cache, as in the sketch below.
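As a rough illustration, a tiled version of the question's loop might look like this (the tile size of 64 is an assumption to tune per machine; matice1, matice2, and matice3 are the question's arrays, with matice3 relied on being zero-initialized):

int n = rozmer;
const int blockSize = 64; // assumed tile size; tune for your cache

for (int j0 = 0; j0 < n; j0 += blockSize)
    for (int k0 = 0; k0 < n; k0 += blockSize)
        for (int m0 = 0; m0 < n; m0 += blockSize)
        {
            // Work on one blockSize x blockSize tile at a time so it stays cache-resident.
            int jMax = Math.Min(j0 + blockSize, n);
            int kMax = Math.Min(k0 + blockSize, n);
            int mMax = Math.Min(m0 + blockSize, n);
            for (int j = j0; j < jMax; j++)
                for (int k = k0; k < kMax; k++)
                {
                    float temp = 0;
                    for (int m = m0; m < mMax; m++)
                        temp += matice1[j, m] * matice2[m, k];
                    matice3[j, k] += temp; // accumulate partial sums across m-tiles
                }
        }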

If you time other pairs (2^n - 1, 2^n), I expect you'll see similar effects.

To explain more fully: in the inner loop, where you access matice2[m, k], it's likely that matice2[m, k] and matice2[m+1, k] are offset from each other by 2048 * sizeof(float) and thus map to the same index in the L1 cache. With an N-way associative cache you will typically have 1-8 cache locations for all of these, so almost all of those accesses will trigger an L1 cache eviction and a fetch from a slower cache or main memory.

16 votes

This may have to do with the size of your CPU cache. If two rows of the matrix do not fit, then you will lose time swapping in elements from RAM. The extra 4095 elements may be just enough to prevent rows from fitting.

In your case, two rows of a 2047x2047 matrix fall within 16 KB of memory (assuming 32-bit types). For example, if you have a 64 KB L1 cache (closest to the CPU on the bus), then you can fit at least four rows (of 2047 32-bit elements) into the cache at once. With the longer rows, if any padding is required that pushes a pair of rows beyond 16 KB, things start to get messy. Also, each time you miss the cache, swapping in data from another cache or from main memory delays things.

My guess is that the variance in run times you're seeing with the differently sized matrices is affected by how effectively the operating system can make use of the available cache (and some combinations are just problematic). Of course, this is all a gross simplification on my part.

11 votes

Louis Brandy wrote two blog posts analyzing exactly this issue:

More Cache Craziness and Computational Performance - A Beginner's Case Study, with some interesting statistics and an attempt to explain the behavior in more detail. It does indeed come down to cache size limitations.

5 votes

Given that the time is dropping at larger sizes, wouldn't it be more likely to be cache conflicts, especially with powers of 2 for the problematic matrix sizes? I am no expert on caching issues, but there is excellent info on cache-related performance issues here.

4 votes

As you are accessing the matice2 array vertically, it will be swapped in and out of the cache a lot more. If you mirror the array diagonally (i.e., transpose it), so that you can access it as [k, m] instead of [m, k], the code will run a lot faster.

I tested this for 1024x1024 matrices, and it is about twice as fast. For 2048x2048 matrices it's about ten times faster.
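For reference, a sketch of that transformation against the question's code (matice2T is a hypothetical name for the transposed copy):

// Transpose matice2 once up front...
float[,] matice2T = new float[rozmer, rozmer];
for (int m = 0; m < rozmer; m++)
    for (int k = 0; k < rozmer; k++)
        matice2T[k, m] = matice2[m, k];

// ...then the inner loop reads both operands sequentially in memory.
for (int j = 0; j < rozmer; j++)
    for (int k = 0; k < rozmer; k++)
    {
        float temp = 0;
        for (int m = 0; m < rozmer; m++)
            temp += matice1[j, m] * matice2T[k, m];
        matice3[j, k] = temp;
    }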

4 votes

Cache Aliasing

Or cache thrashing, if I can coin a term.

Caches work by indexing with low-order bits and tagging with high-order bits.

Imagine that your cache has 4 words and that your matrix is 4 x 4. When a column is accessed and the row is any power of two in length, each column element in memory will map to the same cache element.

A power of two plus one is actually about optimal for this problem: each new column element maps to the next cache slot, exactly as if you were accessing by row.
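A toy sketch of that mapping (assuming a direct-mapped cache of 4 one-word slots, as in the example above):

// slot = word address % 4 in this toy model
for (int row = 0; row < 4; row++)
{
    int slotWidth4 = (row * 4) % 4; // column walk, 4-wide matrix: always slot 0
    int slotWidth5 = (row * 5) % 4; // column walk, 5-wide matrix: slots 0, 1, 2, 3
    Console.WriteLine($"row {row}: width-4 -> slot {slotWidth4}, width-5 -> slot {slotWidth5}");
}
// With width 4, every column element evicts the previous one;
// with width 5, each element lands in a fresh slot.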

In real life, a tag covers multiple sequentially increasing addresses, which caches several adjacent elements per line. By offsetting the bucket that each new row maps to, traversing the column doesn't replace the previous entry. When the next column is traversed, the entire cache will be filled with different rows, and each row section that fits into the cache will be hit for several columns.

Since the cache is vastly faster than DRAM (mostly by virtue of being on-chip), the hit rate is everything.

2 votes

You appear to have hit a cache size limit, or perhaps you have some repeatability problems in your timings.

Whatever the issue is, you simply should not write matrix multiplication yourself in C#; use an optimized BLAS instead. That size of matrix should be multiplied in under a second on any modern machine.
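For example, a minimal sketch using the Math.NET Numerics package (the package choice is an assumption; any BLAS-backed library serves, and a native provider such as MKL can be plugged in):

using MathNet.Numerics.LinearAlgebra;

var a = Matrix<float>.Build.Random(2048, 2048);
var b = Matrix<float>.Build.Random(2048, 2048);
// MathNet.Numerics.Control.UseNativeMKL(); // optional: route through a native BLAS provider
var c = a * b; // optimized multiply instead of the naive triple loop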

1 vote

Effectively utilizing the cache hierarchy is very important. You need to make sure that multidimensional arrays store their data in a cache-friendly arrangement, which can be accomplished by tiling. To do this, you'll need to store the 2D array as a 1D array together with an indexing mechanism (see the sketch below). The problem with the traditional layout is that although two adjacent array elements in the same row are next to each other in memory, two adjacent elements in the same column are separated by W elements, where W is the number of columns. Tiling can make as much as a factor-of-ten performance difference.
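A minimal sketch of the flat-storage idea under a row-major layout (the Idx helper is a hypothetical name; tiles would then be laid out as contiguous runs of this 1D array):

int n = rozmer;
float[] flat = new float[n * n];

// Indexing mechanism: element (row, col) lives at row * n + col.
int Idx(int row, int col) => row * n + col;

flat[Idx(2, 3)] = 1.0f; // the same element a 2D array would call [2, 3]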

0 votes

I suspect it is the result of something called "sequential flooding": you are looping through a list of objects that is slightly larger than the cache size, so every single request to the list (array) must be served from RAM, and you will not get a single cache hit.

In your case, you are looping through your arrays' 2048 indexes 2048 times, but you only have space for 2047 (possibly due to some overhead from the array structure), so each time you access an array position it must be fetched from RAM again. It is then stored in the cache, but right before it is used again, it is evicted. So the cache is essentially useless, leading to a much longer execution time.