3
votes

I need to know something about CUDA shared memory. Let's say I launch 50 blocks with 10 threads per block on a G80 card. Each SM of a G80 can host up to 8 blocks simultaneously. Assume that, after doing some calculations, the shared memory is fully occupied.

What will be the values in shared memory when the next 8 new blocks arrive? Will the previous values still reside there? Or will the previous values be copied to global memory, with the shared memory refreshed for the next 8 blocks?
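For concreteness, here is a minimal sketch of the kind of kernel I mean (the kernel name and sizes are just illustrative):

    // A kernel whose per-block shared-memory footprint is large enough to
    // occupy an SM (the G80 has 16 KB of shared memory per SM, part of
    // which is reserved by the runtime).
    __global__ void work(float *out)
    {
        __shared__ float buf[3584];              // 3584 * 4 B = 14 KB per block

        buf[threadIdx.x] = (float)blockIdx.x;    // some calculation writes into it
        __syncthreads();

        out[blockIdx.x * blockDim.x + threadIdx.x] = buf[threadIdx.x];
    }

    // launched as: work<<<50, 10>>>(d_out);     // 50 blocks, 10 threads each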

2
Can you explain a little better? I am not sure I'm following you. – lurscher
My interpretation/simplification: each block requires all of the shared memory of one processor, i.e., each processor can host only one block at a time. Enough blocks are launched that the processors will host two blocks in order to complete the kernel execution. Now, looking at one processor: after it executes the first block, the shared memory has been used and may have meaningful values in it. Will these values still be there for the second block, and will they be at the same addresses as they were for the previous block? – jmilloy
Yes, that is the answer I want. – kar

2 Answers

6
votes

The reference states the following about the type qualifiers:

  1. Variables in registers: per thread, lifetime limited to the kernel
  2. Variables in local memory (which physically resides in off-chip global memory): per thread, lifetime limited to the kernel
  3. __device__ __shared__ variables in shared memory: per block, lifetime limited to the kernel
  4. __device__ variables in global memory: visible to the whole grid, persist until the application exits
  5. __device__ __constant__ variables in constant memory: visible to the whole grid, persist until the application exits

Thus, from this reference, the answer to your question is that values in shared memory live only as long as the block that wrote them: shared memory is refreshed (its contents are undefined) for the next 8 blocks, and the previous values are not copied to global memory automatically.
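To make these lifetimes concrete, here is a minimal sketch (all names are made up, and it assumes a launch with at most 32 threads per block):

    __device__ float g_val;                  // 4: global memory, lives until the app exits
    __device__ __constant__ float c_val;     // 5: constant memory, lives until the app exits

    __global__ void lifetimes(float *out)
    {
        float reg = (float)threadIdx.x;      // 1: register, per thread, kernel lifetime
        float tmp[64];                       // 2: per-thread array, may be placed in
        for (int k = 0; k < 64; ++k)         //    local (off-chip) memory, kernel lifetime
            tmp[k] = reg;

        __shared__ float shm[32];            // 3: shared memory, per block, kernel lifetime

        // shm starts with undefined contents in every block: it must be written
        // before it is read, and anything worth keeping must be copied out to
        // global memory before the block finishes.
        shm[threadIdx.x] = tmp[63] + g_val + c_val;
        __syncthreads();

        out[blockIdx.x * blockDim.x + threadIdx.x] = shm[threadIdx.x];
    }

Every block that runs this kernel sees shm with undefined contents, regardless of what earlier blocks on the same SM may have written there.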

2
votes

For kernel blocks, the execution order and the assignment to SMs are undefined, effectively random. In that sense, even if the old values were preserved at the same addresses, it would be hard to keep track of them, and I doubt there is even a way to do that. Communication between blocks is done via off-chip memory. The latency associated with off-chip memory is the performance killer, which is what makes GPU programming tricky. On Fermi cards, blocks share some L2 cache, but one can't alter the behavior of these caches.
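As a hedged sketch of that pattern (kernel names are illustrative): each block stages its result in off-chip global memory, and a second kernel launch, which acts as the grid-wide synchronization point, reads it back:

    __global__ void produce(float *partials)
    {
        __shared__ float scratch[64];               // per block, dies with the block
        scratch[threadIdx.x] = (float)(blockIdx.x + threadIdx.x);
        __syncthreads();
        if (threadIdx.x == 0)
            partials[blockIdx.x] = scratch[0];      // stage the result off chip
    }

    __global__ void consume(const float *partials, float *result, int numPartials)
    {
        float sum = 0.0f;                           // one thread combines everything
        for (int i = 0; i < numPartials; ++i)
            sum += partials[i];                     // each read pays off-chip latency
        *result = sum;
    }

    // host side: the launch boundary guarantees produce's global-memory
    // writes are visible to consume
    //   produce<<<50, 64>>>(d_partials);
    //   consume<<<1, 1>>>(d_partials, d_result, 50);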