
I am new to CUDA and going through the CUDA Toolkit documentation. There I found an example of matrix multiplication using shared memory. When the Matrix structure is copied from host memory to device memory, only the data elements are copied. What I can't understand is how the other members are copied to device memory.

The Matrix structure is as follows:

// Matrices are stored in row-major order:
// M(row, col) = *(M.elements + row * M.stride + col)
typedef struct {
    int width;
    int height;
    int stride;
    float* elements;
} Matrix;

Here is the host code where the data transfer happens:

// Matrix multiplication - Host code
// Matrix dimensions are assumed to be multiples of BLOCK_SIZE
void MatMul(const Matrix A, const Matrix B, Matrix C)
{
    // Load A and B to device memory
    Matrix d_A;
    d_A.width = d_A.stride = A.width; d_A.height = A.height;
    size_t size = A.width * A.height * sizeof(float);
    cudaMalloc(&d_A.elements, size);
    cudaMemcpy(d_A.elements, A.elements, size,
               cudaMemcpyHostToDevice);
    Matrix d_B;
    d_B.width = d_B.stride = B.width; d_B.height = B.height;
    size = B.width * B.height * sizeof(float);
    cudaMalloc(&d_B.elements, size);
    cudaMemcpy(d_B.elements, B.elements, size,
               cudaMemcpyHostToDevice);

    // Allocate C in device memory
    Matrix d_C;
    d_C.width = d_C.stride = C.width; d_C.height = C.height;
    size = C.width * C.height * sizeof(float);
    cudaMalloc(&d_C.elements, size);

    // Invoke kernel
    dim3 dimBlock(BLOCK_SIZE, BLOCK_SIZE);
    dim3 dimGrid(B.width / dimBlock.x, A.height / dimBlock.y);
    MatMulKernel<<<dimGrid, dimBlock>>>(d_A, d_B, d_C);

    // Read C from device memory
    cudaMemcpy(C.elements, d_C.elements, size,
               cudaMemcpyDeviceToHost);

    // Free device memory
    cudaFree(d_A.elements);
    cudaFree(d_B.elements);
    cudaFree(d_C.elements);
}

What I don't understand here is how width, stride and height are copied to device memory, because cudaMalloc and cudaMemcpy are only used for the elements. Is there something I have missed in understanding this?

The kernel helper functions:

// Get a matrix element
__device__ float GetElement(const Matrix A, int row, int col)
{
    return A.elements[row * A.stride + col];
}

// Set a matrix element
__device__ void SetElement(Matrix A, int row, int col,
                           float value)
{
    A.elements[row * A.stride + col] = value;
}

// Get the BLOCK_SIZExBLOCK_SIZE sub-matrix Asub of A that is
// located col sub-matrices to the right and row sub-matrices down
// from the upper-left corner of A
__device__ Matrix GetSubMatrix(Matrix A, int row, int col)
{
    Matrix Asub;
    Asub.width    = BLOCK_SIZE;
    Asub.height   = BLOCK_SIZE;
    Asub.stride   = A.stride;
    Asub.elements = &A.elements[A.stride * BLOCK_SIZE * row
                                         + BLOCK_SIZE * col];
    return Asub;
}

The matrix multiplication kernel code:

__global__ void MatMulKernel(Matrix A, Matrix B, Matrix C)
{
    // Block row and column
    int blockRow = blockIdx.y;
    int blockCol = blockIdx.x;

    // Each thread block computes one sub-matrix Csub of C
    Matrix Csub = GetSubMatrix(C, blockRow, blockCol);

    // Each thread computes one element of Csub
    // by accumulating results into Cvalue
    float Cvalue = 0;

    // Thread row and column within Csub
    int row = threadIdx.y;
    int col = threadIdx.x;

    // Loop over all the sub-matrices of A and B that are
    // required to compute Csub
    // Multiply each pair of sub-matrices together
    // and accumulate the results
    for (int m = 0; m < (A.width / BLOCK_SIZE); ++m) {

        // Get sub-matrix Asub of A
        Matrix Asub = GetSubMatrix(A, blockRow, m);

        // Get sub-matrix Bsub of B
        Matrix Bsub = GetSubMatrix(B, m, blockCol);

        // Shared memory used to store Asub and Bsub respectively
        __shared__ float As[BLOCK_SIZE][BLOCK_SIZE];
        __shared__ float Bs[BLOCK_SIZE][BLOCK_SIZE];

        // Load Asub and Bsub from device memory to shared memory
        // Each thread loads one element of each sub-matrix
        As[row][col] = GetElement(Asub, row, col);
        Bs[row][col] = GetElement(Bsub, row, col);

        // Synchronize to make sure the sub-matrices are loaded
        // before starting the computation
        __syncthreads();
        // Multiply Asub and Bsub together
        for (int e = 0; e < BLOCK_SIZE; ++e)
            Cvalue += As[row][e] * Bs[e][col];

        // Synchronize to make sure that the preceding
        // computation is done before loading two new
        // sub-matrices of A and B in the next iteration
        __syncthreads();
    }

    // Write Csub to device memory
    // Each thread writes one element
    SetElement(Csub, row, col, Cvalue);
}
Please add the kernel code. I suppose there is no need to copy width, height and stride explicitly, as they may be figured out by the kernel from the grid and block sizes. (Sergey)
Here in MatMulKernel the A.width variable is used, and it is obtained from the input matrix parameter to the kernel. But in the MatMul function there is no memory copy for that variable; there, a matrix named d_A is created and its width member is set just as we do in normal C code. (Aliya Clark)

1 Answer


For those who are wondering, the sample code in question is from NVIDIA's CUDA Toolkit documentation, in the shared memory topic: CUDA C Programming Guide: Shared Memory.

So why does this sample work? Yes, only the elements array is sent to the device using the cudaMalloc and cudaMemcpy functions. And yes, the matrix dimensions are used within the kernel on the device side without being explicitly copied to device memory with cudaMemcpy.

You must NOT treat arrays and scalar parameters the same way. Let me explain how these values are sent to the kernel (a minimal sketch follows the list):

  1. We declare the matrix on the CPU side; all members are uninitialized.
  2. We assign the dimensions; the pointer is still uninitialized.
  3. We allocate and copy memory on the device side with the API functions; the pointer is now initialized, but it targets device memory and cannot be used like a regular host array.
  4. We give the matrix as a parameter to the kernel, NOT by pointer but by value.

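To make step 4 concrete, here is a minimal host-side sketch of those four steps, assuming the same Matrix type; MyKernel is a hypothetical kernel name, not part of the documentation's sample:

Matrix d_M;                                    // 1. declare: all members uninitialized
d_M.width = d_M.stride = 64; d_M.height = 64;  // 2. assign the dimensions on the host
size_t bytes = d_M.width * d_M.height * sizeof(float);
cudaMalloc(&d_M.elements, bytes);              // 3. pointer now targets device memory
// (copy input data into d_M.elements with cudaMemcpy if needed)
MyKernel<<<1, 64>>>(d_M);                      // 4. pass the WHOLE struct by value
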
And that's the trick. The complete structure instance is given as a parameter, and it contains:

  1. Three integers: the width, height and stride of the matrix
  2. A pointer to the array that contains the matrix data

Giving integers as parameters in the kernel launch is obviously possible, and it works fine. Giving the pointer to an array is possible too: the pointer itself is copied, meaning we create another pointer pointing to the same zone in memory. If the array it targeted were in host memory, it would cause errors; but since it was allocated on the device side with the API functions, it works fine.
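
To see the whole mechanism in one place, here is a small self-contained sketch (my own illustration, not the documentation's sample) that passes a Matrix by value and prints its members from the device; it assumes a GPU that supports device-side printf:

#include <cstdio>
#include <cuda_runtime.h>

typedef struct {
    int width;
    int height;
    int stride;
    float* elements;
} Matrix;

__global__ void ShowMatrix(Matrix M)
{
    // M is a per-launch copy of the host-side struct: width, height and
    // stride arrived by value, and M.elements is a copy of the device
    // pointer that cudaMalloc filled in on the host.
    if (threadIdx.x == 0)
        printf("width=%d height=%d stride=%d first=%f\n",
               M.width, M.height, M.stride, M.elements[0]);
}

int main()
{
    Matrix d_M;
    d_M.width = d_M.stride = 4; d_M.height = 4;   // plain host-side assignments

    float h_data[16] = { 42.0f };                 // first element 42, rest zero
    cudaMalloc(&d_M.elements, sizeof(h_data));
    cudaMemcpy(d_M.elements, h_data, sizeof(h_data), cudaMemcpyHostToDevice);

    ShowMatrix<<<1, 4>>>(d_M);                    // the struct itself is copied by value
    cudaDeviceSynchronize();

    cudaFree(d_M.elements);
    return 0;
}

If the struct were passed by pointer instead, the kernel would receive a host address, and dereferencing it on the device would fail.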