3
votes

In a CUDA kernel, I have code similar to the following. I am trying to calculate one numerator per thread, accumulate the numerators over the block to get a denominator, and then return the ratio. However, denom ends up equal to whatever numer value was calculated by the thread with the largest threadIdx.x in the block, rather than the sum of the numer values calculated across all the threads in the block. Does anyone know what is going on?

extern __shared__ float s_shared[];

float numer = //calculate numerator

s_shared[threadIdx.x] = numer;
s_shared[blockDim.x] += numer;
__syncthreads();

float denom = s_shared[blockDim.x];
float result = numer/denom;

"result" should always be between 0 and 1 and should sum to 1 across the block, but instead it is equal to 1.0 for every thread where threadIdx.x is the maximum, and some other value not confined to the range for the other threads in the block.


1 Answer

4
votes

You're not synchronizing the summing into the s_shared[blockDim.x] location properly. None of the threads waits to see what the others have written before adding its own sum. It's sort of like:

  • Everyone reads zero,
  • goes home and calculates zero + numer,
  • then everyone writes zero + numer back to the same memory location.

The thread with the highest threadIdx.x wins because it is the one most likely to write last, I suppose.

What you want to do instead, in order to get a quick sum, is a binary (tree) reduction on s_shared[threadIdx.x]; see the sketch below the list:

  • everyone writes their numer
  • half the threads calculate sums of pairs and write those to a new location
  • a quarter of the threads calculate the sums of pairs of pairs, and write those to a new location
  • etc
  • until you just have one thread and one sum

This takes O(n) work and O(log n) time.
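
Here is a minimal sketch of that reduction, assuming blockDim.x is a power of two and the kernel is launched with blockDim.x * sizeof(float) bytes of dynamic shared memory; the blockRatio helper name is just for illustration:

__device__ float blockRatio(float numer)
{
    extern __shared__ float s_shared[];

    // Step 1: every thread writes its numerator.
    s_shared[threadIdx.x] = numer;
    __syncthreads();

    // Each pass, half as many threads add pairs in place,
    // until s_shared[0] holds the sum over the whole block.
    for (unsigned int stride = blockDim.x / 2; stride > 0; stride >>= 1)
    {
        if (threadIdx.x < stride)
            s_shared[threadIdx.x] += s_shared[threadIdx.x + stride];
        __syncthreads(); // finish this level before starting the next
    }

    // Every thread reads the same denominator.
    return numer / s_shared[0];
}

Note that this version sums in place rather than writing each level to a new location, so it only needs blockDim.x floats of shared memory.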