3 votes

I was trying to find the best work-group size for a problem, and I noticed something that I couldn't explain to myself.

These are my results:

  • GlobalWorkSize {6400 6400 1}, WorkGroupSize {64 4 1}, Time(Milliseconds) = 44.18
  • GlobalWorkSize {6400 6400 1}, WorkGroupSize {4 64 1}, Time(Milliseconds) = 24.39

Swapping the axes made execution twice as fast. Why!?

By the way, I was using an AMD GPU.

Thanks :-)

EDIT: This is the kernel (a simple matrix transposition):

__kernel void transpose(__global float *input, __global float *output, const int size){
    int i = get_global_id(0);               // row index of the output element
    int j = get_global_id(1);               // column index of the output element
    output[i*size + j] = input[j*size + i]; // out(i, j) = in(j, i)
}
I guess it depends on the kernel ;] – Thomas
Can you post the kernel please? This is a matter of how it is iterating through computations or memory operations. – mfa

1 Answer

6 votes

I agree with @Thomas: it most probably depends on your kernel. In the second case you are most likely accessing memory in a coalesced way and/or making full use of each memory transaction.

Coalescence: When threads need to access elements in memory, the hardware tries to serve them in as few transactions as possible, i.e. if thread 0 and thread 1 have to access contiguous elements, there will be only one transaction.
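As a hypothetical illustration (not part of the original code), compare two copy kernels: in the first, consecutive work-items read consecutive addresses and can share a transaction; in the second, an arbitrary stride keeps every read in its own transaction.

// Coalesced: work-item 0 reads in[0], work-item 1 reads in[1], ...
// Neighbouring work-items touch neighbouring addresses, so the hardware
// can serve a whole group of them with a single memory transaction.
__kernel void coalesced_copy(__global const float *in, __global float *out){
    int gid = get_global_id(0);
    out[gid] = in[gid];
}

// Strided: neighbouring work-items are 256 floats apart, so each read
// lands in a different transaction (stride and buffer size are assumed
// purely for illustration).
__kernel void strided_copy(__global const float *in, __global float *out){
    int gid = get_global_id(0);
    out[gid] = in[gid * 256];
}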

Full use of a memory transaction: Let's say you have a GPU that fetches 32 bytes in one transaction. If you have only 4 threads that each need to fetch one int, you are using only half of the data fetched by the transaction and you waste the rest (assuming an int is 4 bytes).
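A quick back-of-the-envelope check of those numbers (plain C, values taken from the hypothetical 32-byte-transaction GPU above):

#include <stdio.h>

int main(void)
{
    const int transaction_bytes = 32;           /* one transaction fetches 32 bytes */
    const int threads           = 4;            /* work-items reading one int each */
    const int bytes_used        = threads * 4;  /* 4 ints of 4 bytes = 16 bytes consumed */

    printf("used %d of %d bytes -> %d%% of the transaction wasted\n",
           bytes_used, transaction_bytes,
           100 * (transaction_bytes - bytes_used) / transaction_bytes);
    return 0;
}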

To illustrate this, let's say you have an n by n matrix to access. The matrix is stored in row-major order, and you use n threads organized in one dimension. You have two possibilities:

  1. Each workitem takes care of one column, looping through each column element one at a time.
  2. Each workitem takes care of one line, looping through each line element one at a time.

It might be counter-intuitive, but the first solution allows coalesced accesses while the second doesn't. The reason is that when the first work-item needs to access the first element of the first column, the second work-item accesses the first element of the second column, and so on. These elements are contiguous in memory. This is not the case for the second solution.
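To make the two options concrete, here is a hypothetical sketch (the kernel names and the sums buffer are mine, just to give the loops something to do; both are launched with n work-items in one dimension):

// Option 1: one work-item per column. In iteration `row`, work-item k reads
// matrix[row*n + k]; neighbouring work-items read neighbouring addresses,
// so the reads can be coalesced.
__kernel void per_column(__global const float *matrix, __global float *sums, const int n){
    int col = get_global_id(0);
    float acc = 0.0f;
    for (int row = 0; row < n; ++row)
        acc += matrix[row * n + col];
    sums[col] = acc;
}

// Option 2: one work-item per row. In iteration `col`, work-item k reads
// matrix[k*n + col]; neighbouring work-items are n floats apart, so every
// read needs its own transaction.
__kernel void per_row(__global const float *matrix, __global float *sums, const int n){
    int row = get_global_id(0);
    float acc = 0.0f;
    for (int col = 0; col < n; ++col)
        acc += matrix[row * n + col];
    sums[row] = acc;
}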

Now if you take the same example and apply solution 1, but this time with 4 work-items instead of n (on the same GPU as before), you will most probably increase the time by a factor of 2, since you will waste half of every memory transaction.

EDIT: Now that you have posted your kernel, I see that I forgot to mention something else.

With your kernel, it seems that choosing a local size of (1, 256) or (256, 1) is always a bad choice. In the first case, 256 transactions are necessary to read a column of input (each fetching 32 bytes, of which only 4 are used, keeping the same GPU as in my previous examples), while 32 transactions are enough to write to output: you can write 8 floats in one transaction, hence 32 transactions for the 256 elements.

The problem is the same with a work-group size of (256, 1), but this time using 32 transactions to read and 256 to write.
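Sticking with the same hypothetical GPU, the transaction counts for both local sizes can be tallied like this (plain C, just reproducing the arithmetic above):

#include <stdio.h>

int main(void)
{
    const int elements      = 256;      /* elements touched by one work-group      */
    const int floats_per_tx = 32 / 4;   /* 32-byte transaction / 4-byte float = 8  */

    const int strided_tx    = elements;                 /* 1 useful float per transaction */
    const int contiguous_tx = elements / floats_per_tx; /* 8 floats per transaction       */

    printf("(1, 256): %d read transactions, %d write transactions\n", strided_tx, contiguous_tx);
    printf("(256, 1): %d read transactions, %d write transactions\n", contiguous_tx, strided_tx);
    return 0;
}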

So why does the first size work better? Because there is a cache system that can mitigate the bad access pattern on the read side. Therefore the size (1, 256) is good for the write part, and the cache system handles the not-so-good read part, decreasing the number of read transactions actually needed.

Note that the number of transactions decreases overall (taking into consideration all the work-groups within the NDRange). For example, the first work-group issues 256 transactions to read the first 256 elements of the first column. The second work-group might just go to the cache to retrieve the elements of the second column, because they were already fetched by the (32-byte) transactions issued by the first work-group.

Now, I'm almost sure that you can do better than (1, 256); try (8, 32).
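For reference, here is a sketch of how that local size would be passed on the host side. The function name is mine, `queue`, `kernel`, and `size` are assumed to have been created and set up elsewhere, and error handling is omitted:

#include <CL/cl.h>

/* Sketch only: enqueue the transpose kernel with an (8, 32) work-group size. */
void launch_transpose(cl_command_queue queue, cl_kernel kernel, size_t size)
{
    size_t global[2] = { size, size };   /* e.g. 6400 x 6400 */
    size_t local[2]  = { 8, 32 };        /* the suggested work-group size */

    /* global must be a multiple of local in each dimension
       (it is here: 6400 = 8 * 800 = 32 * 200) */
    clEnqueueNDRangeKernel(queue, kernel, 2, NULL, global, local, 0, NULL, NULL);
}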