3 votes

I'm trying to figure out what exactly each of the metrics reported by "nvprof" means. More specifically, I can't figure out which transactions are counted as System Memory and Device Memory reads and writes. I wrote a very basic piece of code just to help figure this out.

#define TYPE float
#define BDIMX 16
#define BDIMY 16
#include <cuda.h>
#include <cmath>
#include <cstdio>
#include <iostream>

__global__ void kernel(TYPE *g_output, TYPE *g_input, const int dimx, const int dimy)
{
  __shared__ float s_data[BDIMY][BDIMX];
  int ix = blockIdx.x * blockDim.x + threadIdx.x;
  int iy = blockIdx.y * blockDim.y + threadIdx.y;
  int in_idx = iy * dimx + ix; // index for reading input
  int tx = threadIdx.x;        // thread's x-index into the corresponding shared memory tile
  int ty = threadIdx.y;        // thread's y-index into the corresponding shared memory tile
  s_data[ty][tx] = g_input[in_idx];
  __syncthreads();
  g_output[in_idx] = s_data[ty][tx] * 1.3;
}

int main(){
  int size_x = 16, size_y = 16;
  dim3 numTB;
  numTB.x = (int)ceil((double)(size_x)/(double)BDIMX);
  numTB.y = (int)ceil((double)(size_y)/(double)BDIMY);
  dim3 tbSize;
  tbSize.x = BDIMX;
  tbSize.y = BDIMY;
  float *a, *a_out;
  float *a_d = (float *) malloc(size_x * size_y * sizeof(TYPE)); // host staging buffer
  cudaMalloc((void**)&a,     size_x * size_y * sizeof(TYPE));
  cudaMalloc((void**)&a_out, size_x * size_y * sizeof(TYPE));
  for(int index = 0; index < size_x * size_y; index++){
      a_d[index] = index;
  }
  cudaMemcpy(a, a_d, size_x * size_y * sizeof(TYPE), cudaMemcpyHostToDevice);
  kernel<<<numTB, tbSize>>>(a_out, a, size_x, size_y);
  cudaDeviceSynchronize();
  cudaFree(a);
  cudaFree(a_out);
  free(a_d);
  return 0;
}

Then I run nvprof with --metrics all to collect all the metrics; the command is shown below, followed by the part of the output I'm interested in:
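
The exact invocation looks roughly like this (assuming the program was compiled into a binary named a.out; the name is just my placeholder):

nvprof --metrics all ./a.out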

           Metric Name                        Metric Description         Min         Max         Avg
Device "Tesla K40c (0)"
  Kernel: kernel(float*, float*, int, int)
    local_load_transactions                   Local Load Transactions           0           0           0
   local_store_transactions                  Local Store Transactions           0           0           0
   shared_load_transactions                  Shared Load Transactions           8           8           8
  shared_store_transactions                 Shared Store Transactions           8           8           8
           gld_transactions                  Global Load Transactions           8           8           8
           gst_transactions                 Global Store Transactions           8           8           8
   sysmem_read_transactions           System Memory Read Transactions           0           0           0
  sysmem_write_transactions          System Memory Write Transactions           4           4           4
     tex_cache_transactions                Texture Cache Transactions           0           0           0
     dram_read_transactions           Device Memory Read Transactions           0           0           0
    dram_write_transactions          Device Memory Write Transactions          40          40          40
       l2_read_transactions                      L2 Read Transactions          70          70          70
      l2_write_transactions                     L2 Write Transactions          46          46          46

I understand the shared and global accesses. The global accesses are coalesced and, since there are 8 warps, there are 8 transactions. But I can't figure out the system memory and device memory write transaction numbers.
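
For reference, here is a rough sketch of the arithmetic behind those two numbers (just bookkeeping with the sizes from my code, not something the profiler reports):

// one 16 x 16 threadblock, so:
const int threads_per_block = BDIMX * BDIMY;          // 16 * 16 = 256 threads
const int warps_per_block   = threads_per_block / 32; // 256 / 32 = 8 warps
const int bytes_per_warp    = 32 * sizeof(float);     // 32 lanes * 4 B = 128 B, contiguous per warp
// one coalesced 128-byte transaction per warp for the loads of g_input -> 8 gld_transactions
// and the mirrored stores to g_output give 8 gst_transactions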


1 Answer

4 votes

It helps if you have a model of the GPU memory hierarchy with both logical and physical spaces, such as the one here.

Referring to the "overview tab" diagram:

  1. gld_transactions refer to transactions issued from the warp targeting the global logical space. On the diagram, this would be the line from the "Kernel" box on the left to the "global" box to the right of it, and the logical data movement direction would be from right to left.

  2. gst_transactions refer to the same line as above, but logically from left to right. Note that these logical global transactions could hit in a cache and not go anywhere after that. From the metrics standpoint, those transaction types only refer to the indicated line on the diagram.

  3. dram_write_transactions refer to the line on the diagram which connects device memory on the right with the L2 cache, and the logical data flow is from left to right on this line. Since the L2 cacheline is 32 bytes (whereas the L1 cacheline, and the size of a global transaction, is 128 bytes), the device memory transactions are also 32 bytes, not 128 bytes. So a global write transaction that passes through L1 (it is a write-through cache, if enabled) and L2 will generate 4 dram_write_transactions. This should explain 32 out of the 40 transactions (see the arithmetic sketch after this list).

  4. system memory transactions target zero-copy host memory. You don't seem to be using that, so I can't explain those.
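
As a rough check of the arithmetic in point 3, using the sizes from the question (the constants below are just illustrative bookkeeping, not a profiler API):

const int total_threads  = 16 * 16;                       // 256 threads, each storing one 4-byte float
const int bytes_stored   = total_threads * sizeof(float); // 1024 B written to g_output
const int dram_txn_bytes = 32;                            // size of an L2 <-> device memory transaction
const int dram_writes    = bytes_stored / dram_txn_bytes; // 1024 / 32 = 32
// equivalently: 8 gst_transactions * (128 B / 32 B) = 32 of the 40 dram_write_transactions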

Note that in some cases, for some metrics, on some GPUs, the profiler may have some "inaccuracy" when launching very small numbers of threadblocks. For example, some metrics are sampled on a per-SM basis and scaled (device memory transactions are not in this category, however). If you have disparate work being done on each SM (perhaps because a very small number of threadblocks was launched), then the scaling can be misleading/less accurate. Generally, if you launch a larger number of threadblocks, these effects become insignificant.
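
For example, a quick way to see this with your code (a sketch that leaves the kernel and launch logic unchanged) is to enlarge the problem so that many threadblocks are launched:

int size_x = 1024, size_y = 1024;  // instead of 16 x 16
// numTB then becomes 64 x 64 = 4096 threadblocks spread across all SMs,
// so per-SM sampling/scaling effects largely average out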

This answer may also be of interest.