2 votes

I have two CUDA kernels that compute similar stuff. One uses global memory (myfun is a device function that reads a lot from global memory and does the computation). The second kernel transfers that chunk of data from global memory to shared memory so that it can be shared among the threads of a block. The kernel that uses global memory is much faster than the one with shared memory. What are the possible reasons?

loadArray just copies a small part of d_x to m.

__global__ void mykernel(float *d_x, float *d_y, int *d_z, float *d_u, int N, int K, int D)
{

  int tid = blockIdx.x*blockDim.x + threadIdx.x;  // global thread index
  int index = 0;
  float max_s = -1e+37F;  // running maximum; must start below any score, not above


  if (tid < N)
    {

      for (int i = 0; i < K; i++)
        {

          float s = myfun(&d_x[i*D], d_y, tid);

          if (s > max_s)
            {
              max_s = s;
              index = i;
            }
        }

      d_z[tid] = index;
      d_u[tid] = max_s;
    }
}

Using shared memory:

__global__ void mykernel(float *d_x, float *d_y, int *d_z, float *d_u, int N, int K)
{
  int tid = blockIdx.x*blockDim.x + threadIdx.x;  // global thread index
  int index = 0;
  float max_s = -1e+37F;  // running maximum; must start below any score, not above

  extern __shared__ float m[];  // dynamically sized shared buffer
  if (threadIdx.x == 0)         // note: only one thread does the whole copy
    loadArray(m, d_x);
  __syncthreads();              // block waits until the load is finished

  if (tid < N)
    {

      for (int i = 0; i < K; i++)
        {

          float s = myfun(m, d_y, tid);

          if (s > max_s)
            {
              max_s = s;
              index = i;
            }
        }

      d_z[tid] = index;
      d_u[tid] = max_s;
    }
}

2 Answers

4 votes

The problem is that only the first thread in each block reads from global memory into shared memory; this is much slower than letting all the threads of the block read from global memory simultaneously.

Using shared memory is an advantage when a single thread needs to access neighbouring elements from global memory, but this doesn't appear to be the case here.
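A minimal sketch of the cooperative-load pattern this answer implies: every thread of the block copies a strided subset of the data instead of leaving the whole copy to thread 0. This is not the poster's actual code; `LEN` is an assumed element count standing in for whatever `loadArray` copies.

```cuda
// Hedged sketch, not the original kernel: cooperative shared-memory load.
// LEN is an assumed number of floats to stage in shared memory.
__global__ void mykernel(const float *d_x, int LEN /* , other args */)
{
  extern __shared__ float m[];

  // Thread t copies elements t, t + blockDim.x, t + 2*blockDim.x, ...
  // Consecutive threads touch consecutive addresses, so the global
  // loads coalesce instead of being serialized through one thread.
  for (int j = threadIdx.x; j < LEN; j += blockDim.x)
    m[j] = d_x[j];
  __syncthreads();  // the whole block must finish before anyone reads m

  // ... compute using m[] instead of d_x[] as before ...
}
```

The `__syncthreads()` is still required, but now the copy itself is spread over all threads of the block rather than bottlenecked on thread 0.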

0 votes

IMO, if you have Parallel Nsight installed, say on a Windows machine, and trace the executions, you may get more insight. Alternatively, run your app through the CUDA profiler (cudaprof) to try to figure out where the possible latencies are.
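For example, assuming a CUDA toolkit with a command-line profiler installed (the exact tool name varied across toolkit versions; `nvprof` superseded the older cudaprof in later releases), a basic profiling run might look like:

```shell
# Hedged example: profile kernel times of a hypothetical binary ./myapp.
# Requires the CUDA toolkit's command-line profiler on PATH.
nvprof ./myapp
```

The per-kernel timing summary should show directly whether the shared-memory kernel spends its extra time in the load phase.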