
The CUBLAS documentation mentions that we need synchronization before reading a scalar result:

"Also, the few functions that return a scalar result, such as amax(), amin, asum(), rotg(), rotmg(), dot() and nrm2(), return the resulting value by reference on the host or the device. Notice that even though these functions return immediately, similarly to matrix and vector results, the scalar result is ready only when execution of the routine on the GPU completes. This requires proper synchronization in order to read the result from the host."

Does this mean we should always synchronize before reading a scalar result on the host, even if we only use a single stream? I have looked for an example in NVIDIA's CUDA documentation but could not find one.
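For reference, here is what I understand the documented pattern to be: a minimal sketch in host pointer mode on the default stream (the variable names are mine, not from any NVIDIA example):

```cuda
#include <cstdio>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main()
{
    cublasHandle_t handle;
    cublasCreate(&handle);

    int n = 1 << 20;
    float *d_x;
    cudaMalloc((void **)&d_x, n * sizeof(float));
    // ... fill d_x with data ...

    float result = 0.0f;
    // In host pointer mode, cublasSdot writes the scalar to &result,
    // but per the documentation the value is only valid once the
    // routine has finished executing on the GPU.
    cublasSdot(handle, n, d_x, 1, d_x, 1, &result);

    // Synchronize before reading the scalar on the host.
    cudaDeviceSynchronize();
    printf("dot = %f\n", result);

    cudaFree(d_x);
    cublasDestroy(handle);
    return 0;
}
```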

But in the conjugate gradient example provided by NVIDIA, there is the following code:

while (r1 > tol*tol && k <= max_iter)
{
    if (k > 1)
    {
        b = r1 / r0;
        cublasStatus = cublasSscal(cublasHandle, N, &b, d_p, 1);
        cublasStatus = cublasSaxpy(cublasHandle, N, &alpha, d_r, 1, d_p, 1);
    }
    else
    {
        cublasStatus = cublasScopy(cublasHandle, N, d_r, 1, d_p, 1);
    }

    cusparseScsrmv(cusparseHandle, CUSPARSE_OPERATION_NON_TRANSPOSE, N, N, nz, &alpha, descr, d_val, d_row, d_col, d_p, &beta, d_Ax);
    cublasStatus = cublasSdot(cublasHandle, N, d_p, 1, d_Ax, 1, &dot);
    a = r1 / dot;

    cublasStatus = cublasSaxpy(cublasHandle, N, &a, d_p, 1, d_x, 1);
    na = -a;
    cublasStatus = cublasSaxpy(cublasHandle, N, &na, d_Ax, 1, d_r, 1);

    r0 = r1;
    cublasStatus = cublasSdot(cublasHandle, N, d_r, 1, d_r, 1, &r1);
    cudaThreadSynchronize();
    printf("iteration = %3d, residual = %e\n", k, sqrt(r1));
    k++;
}

Here there is a cudaThreadSynchronize() call before the end of the while loop. Is it for the cublasSdot calls? But there are two cublasSdot calls in the loop. Why is there a cudaThreadSynchronize() after the second cublasSdot, but not the first one?

EDIT: To see what is going on, I used the following code to compare the dot product results with and without synchronization.

#include <cstdlib>
#include <iostream>
#include <vector>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main(int argc, char **argv)
{
    /* Generate a large vector */
    int N = 1024 * 1024 * 512;

    double *x_cpu = (double *)malloc(sizeof(double)*N);
    for (int i = 0; i < N; i++)
    {
        x_cpu[i] = double(rand()) / RAND_MAX;
    }


    double *x_gpu;
    cudaMalloc((void **)&x_gpu, N*sizeof(double));
    cudaMemcpy(x_gpu, x_cpu, N*sizeof(double), cudaMemcpyHostToDevice);

    /* Get handle to the CUBLAS context */
    cublasHandle_t cublasHandle = 0;
    cublasStatus_t cublasStatus;
    cublasStatus = cublasCreate(&cublasHandle);

    int M = 1000;
    std::vector<double> x_dot_vec(M, 0.0);
    double *x_dot_ptr = &(x_dot_vec[0]);

    std::cout << "Begin Launching CUBLAS........" << std::endl;

    for(int j = 0; j < M; j++){
        cublasDdot(cublasHandle, N, x_gpu, 1, x_gpu, 1, x_dot_ptr + j);
    }

    std::cout << "End Launching CUBLAS........." << std::endl;

    double old_value = x_dot_vec.back();
    cudaDeviceSynchronize();
    double new_value = x_dot_vec.back();
    std::cout << "Old Value: " << old_value << ",   New Value: " << new_value << std::endl;

    cublasDestroy(cublasHandle);
    free(x_cpu);
    cudaFree(x_gpu);

    return 0;
}

Here the idea is that we create a very large vector, compute its dot product with cuBLAS many times, and write the returned values into an array on the host. Right after launching all the cuBLAS calls, we read the last element of the result array without synchronization. If cublasDdot is really non-blocking, the last element should not have been written yet. Then we synchronize and read the last element again; this time it should hold the correct dot product, hopefully giving us a different value than the one read without synchronization. When I run this code, however, the two values are always the same, and a long time passes between the output lines before and after the cuBLAS calls. It looks like cublasDdot is actually blocking, contrary to what the cuBLAS documentation says.
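A more direct way to probe whether the calls block, rather than comparing values (reading the host array while the GPU may still be writing it is a race anyway), would be to record an event after the launch loop and poll it immediately. If cudaEventQuery returns cudaErrorNotReady, the host got past the launches before the GPU finished. A rough sketch, reusing the variables from the program above:

```cuda
// Sketch: probe asynchrony with an event instead of reading the result.
cudaEvent_t done;
cudaEventCreate(&done);

for (int j = 0; j < M; j++) {
    cublasDdot(cublasHandle, N, x_gpu, 1, x_gpu, 1, x_dot_ptr + j);
}
cudaEventRecord(done, 0);

// cudaErrorNotReady here means the GPU is still working, i.e. the
// cublasDdot launches did not block the host thread.
if (cudaEventQuery(done) == cudaErrorNotReady) {
    std::cout << "launches were asynchronous" << std::endl;
} else {
    std::cout << "GPU already finished (calls likely blocked)" << std::endl;
}
cudaEventDestroy(done);
```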

I also tried the following version, where the results are written to a device array instead of a host array, but the results look the same.

#include <cstdlib>
#include <iostream>
#include <vector>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main(int argc, char **argv)
{
    /* Generate a large vector */
    int N = 1024 * 1024 * 512;

    double *x_cpu = (double *)malloc(sizeof(double)*N);
    for (int i = 0; i < N; i++)
    {
        x_cpu[i] = double(rand()) / RAND_MAX;
    }


    double *x_gpu;
    cudaMalloc((void **)&x_gpu, N*sizeof(double));
    cudaMemcpy(x_gpu, x_cpu, N*sizeof(double), cudaMemcpyHostToDevice);

    /* Get handle to the CUBLAS context */
    cublasHandle_t cublasHandle = 0;
    cublasStatus_t cublasStatus;
    cublasStatus = cublasCreate(&cublasHandle);
    cublasSetPointerMode(cublasHandle, CUBLAS_POINTER_MODE_DEVICE);

    int M = 1000;
    std::vector<double> x_dot_vec(M, 0.0);
    double *x_dot_ptr = &(x_dot_vec[0]);
    double *dot_gpu;
    cudaMalloc((void **)&dot_gpu, sizeof(double) * M);
    cudaMemcpy(dot_gpu, x_dot_ptr, M * sizeof(double), cudaMemcpyHostToDevice);
    double old_value, new_value;

    std::cout << "Begin Launching CUBLAS........" << std::endl;

    for(int j = 0; j < M; j++){
        cublasDdot(cublasHandle, N, x_gpu, 1, x_gpu, 1, dot_gpu + j);
    }

    std::cout << "End Launching CUBLAS........." << std::endl;

    cudaMemcpy(&old_value, dot_gpu + M - 1, sizeof(double), cudaMemcpyDeviceToHost);
    cudaDeviceSynchronize();
    cudaMemcpy(&new_value, dot_gpu + M - 1, sizeof(double), cudaMemcpyDeviceToHost);
    std::cout << "Old Value: " << old_value << ",   New Value: " << new_value << std::endl;

    cublasDestroy(cublasHandle);
    free(x_cpu);
    cudaFree(x_gpu);
    cudaFree(dot_gpu);

    return 0;
}

1 Answer


I would consider that code to be incorrect. As you have noted, in the CUBLAS V2 API, cublasSdot is a non-blocking call, and theoretically a synchronization primitive is required before the result can be used in host memory.

The first cublasSdot call should also have a synchronization point, something like:

...
cusparseScsrmv(cusparseHandle, CUSPARSE_OPERATION_NON_TRANSPOSE, N, N, nz, &alpha, descr, d_val, d_row, d_col, d_p, &beta, d_Ax);
cublasStatus = cublasSdot(cublasHandle, N, d_p, 1, d_Ax, 1, &dot);
cudaDeviceSynchronize();
a = r1 / dot;
...

That example code also uses the long-deprecated cudaThreadSynchronize API call. I would recommend raising a bug report with NVIDIA about both issues.
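If the host synchronization inside the loop is a concern, one alternative is to switch the handle to CUBLAS_POINTER_MODE_DEVICE so the dot product result stays in device memory, and only fetch it when the host actually needs the residual. A minimal sketch (d_dot is my own variable; note that device pointer mode also changes how the alpha/beta scaling scalars of the other cuBLAS calls must be passed, so it is not a drop-in change to the example):

```cuda
// Sketch: keep the scalar result on the device by using device
// pointer mode, so the dot product itself needs no host sync.
cublasSetPointerMode(cublasHandle, CUBLAS_POINTER_MODE_DEVICE);

float *d_dot;                        // device-side scalar result
cudaMalloc((void **)&d_dot, sizeof(float));

cublasSdot(cublasHandle, N, d_r, 1, d_r, 1, d_dot);

// The synchronous cudaMemcpy both fetches the value and provides the
// synchronization required before reading it on the host.
float r1;
cudaMemcpy(&r1, d_dot, sizeof(float), cudaMemcpyDeviceToHost);
```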