
Memory allocation is one of the most time-consuming operations on a GPU, so I wanted to allocate two arrays with a single call to cudaMalloc, using the following code:

int numElements = 50000;
size_t size = numElements * sizeof(float);

//declarations-initializations
float *d_M = NULL;
err = cudaMalloc((void **)&d_M, 2*size);
//error checking

// Device input vector A
float *d_A = d_M;

// Device input vector B
float *d_B = d_M + size;

err = cudaMemcpy(d_A, h_A, size, cudaMemcpyHostToDevice);
//error checking

err = cudaMemcpy(d_B, h_B, size, cudaMemcpyHostToDevice);
//error checking

The original code is in the samples folder of the CUDA Toolkit, in vectorAdd.cu, so you can assume h_A and h_B are properly initialized and that the code works without the modification I made.
The result was that the second cudaMemcpy failed with an "invalid argument" error.

It seems that the expression "d_M + size" does not return what one would expect, as if device memory behaved differently, but I don't know how.

Is it possible to make my approach (calling cudaMalloc once to allocate memory for two arrays) work? Any comments/answers on whether this is a good approach are also welcome.

UPDATE
As the answers of Robert and dreamcrash suggested, I had to add the number of elements (numElements) to the pointer d_M, not the size, which is the number of bytes. For reference, there was no observable speedup.


1 Answer


You just have to replace

float *d_B = d_M + size;

with

float *d_B = d_M + numElements;

This is pointer arithmetic. If you have an array of floats R = [1.0, 1.2, 3.3, 3.4], you can print its first element with printf("%f", *R);. And the second element? You just write printf("%f\n", *(R + 1));, i.e., you advance by one element, R + 1. You do not write R + sizeof(float), which is what you were effectively doing: R + sizeof(float) points to the element at position R[4], since sizeof(float) = 4.
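To make this concrete, here is a minimal, self-contained host-side sketch (the array values are just the illustrative ones above):

#include <stdio.h>

int main(void)
{
    float R[] = {1.0f, 1.2f, 3.3f, 3.4f};

    printf("%f\n", *R);        // first element, R[0]
    printf("%f\n", *(R + 1));  // second element: the +1 offset counts elements, not bytes

    // R + sizeof(float) would be R + 4, i.e. &R[4] -- one past the end of this
    // 4-element array, which is exactly the kind of off-the-end bug in the question
    return 0;
}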

When you declare float *d_B = d_M + numElements;, the compiler knows that d_M points to contiguous memory in which each element has the size of a float. Hence you do not specify the offset in bytes but in elements, and the compiler does the math for you. This approach is more human-friendly, since it is more intuitive to express pointer arithmetic in terms of elements than in terms of bytes. It is also more portable: if the byte size of a given type changes with the underlying architecture, the compiler handles that for you, so the code will not break because it assumed a fixed byte size.
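Putting it together, here is a minimal sketch of the corrected single-cudaMalloc version (error checking elided as in the question, and h_A, h_B assumed to be initialized host arrays of numElements floats, as in vectorAdd.cu):

int numElements = 50000;
size_t size = numElements * sizeof(float);

float *d_M = NULL;
cudaError_t err = cudaMalloc((void **)&d_M, 2 * size); // one allocation for both vectors

float *d_A = d_M;               // elements [0, numElements) of d_M
float *d_B = d_M + numElements; // elements [numElements, 2*numElements)

err = cudaMemcpy(d_A, h_A, size, cudaMemcpyHostToDevice);
err = cudaMemcpy(d_B, h_B, size, cudaMemcpyHostToDevice);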


You said that "The result was that the second cudaMemcpy returned an error with message invalid argument":

If you print the number corresponding to this error, it prints 11, and if you check the CUDA API documentation you will see that this error corresponds to:

cudaErrorInvalidValue

This indicates that one or more of the parameters passed to the API call is not within an acceptable range of values.

In your example this means that float *d_B = d_M + size; goes out of that range.

You allocated space for 100000 floats: d_A covers elements 0 through 49999, so d_B should start at element 50000. But according to your code, d_B starts at element numElements * sizeof(float) = 50000 * 4 = 200000, and since 200000 > 100000 you get the invalid argument error.
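If you want to see the code and message for yourself, a minimal check after each call might look like this (cudaGetErrorString is part of the CUDA runtime API; the fragment assumes <stdio.h> and <stdlib.h> are included):

err = cudaMemcpy(d_B, h_B, size, cudaMemcpyHostToDevice);
if (err != cudaSuccess)
{
    // With the buggy byte offset this prints something like:
    // "cudaMemcpy failed: invalid argument (code 11)"
    fprintf(stderr, "cudaMemcpy failed: %s (code %d)\n",
            cudaGetErrorString(err), (int)err);
    exit(EXIT_FAILURE);
}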