I have the following (snippet) of a kernel.
__global__ void plain(int* geneVec, float* probs, int* nComponents, float* randomNumbers, int* nGenes)
{
    int xid = threadIdx.x + (blockDim.x * blockIdx.x);
    // tmp is computed in code elided here
    float* currentProbs = (float*)malloc(sizeof(float) * tmp);
    ...
    ...
    currentProbs[0] = probs[start];
    for (int k = 1; k < nComponents[0]; k++)
    {
        currentProbs[k] = currentProbs[k-1] + prob;
    }
    ...
    ...
    free(currentProbs);
}
When currentProbs is allocated statically (even with the same sizes) the kernel is very fast, but when it is dynamically allocated (as above) performance is awful.
This question says that allocating inside a kernel is possible: CUDA allocate memory in __device__ function
Here is a related question: Efficiency of Malloc function in CUDA
I was wondering whether any other methods solve this besides the one proposed in the paper. It seems ridiculous that one cannot malloc/free inside a kernel without this sort of penalty.
Where does tmp come from in your pseudo code? – talonmies
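For reference, the usual workaround is to avoid in-kernel malloc entirely: allocate one large scratch buffer from the host with cudaMalloc and hand each thread its own fixed-size slice. A minimal sketch, assuming a host-known upper bound maxComponents on nComponents[0]; the scratch and maxComponents names are illustrative, not from the original code:

```cuda
#include <cuda_runtime.h>

// Each thread indexes into a pre-allocated scratch buffer instead of
// calling the (slow) device-side malloc/free.
__global__ void plain(int* geneVec, float* probs, int* nComponents,
                      float* randomNumbers, int* nGenes,
                      float* scratch, int maxComponents)
{
    int xid = threadIdx.x + (blockDim.x * blockIdx.x);
    // per-thread slice of the shared scratch allocation
    float* currentProbs = scratch + (size_t)xid * maxComponents;
    // ... same body as before, minus malloc/free ...
}

// Host side (illustrative): one allocation for all threads.
// float* scratch;
// cudaMalloc(&scratch, (size_t)nThreads * maxComponents * sizeof(float));
// plain<<<blocks, threadsPerBlock>>>(geneVec, probs, nComponents,
//                                    randomNumbers, nGenes,
//                                    scratch, maxComponents);
// cudaFree(scratch);
```

The cost is over-allocating to the worst-case size per thread, but the single cudaMalloc happens once on the host rather than once per thread per launch, which is where the device-heap penalty comes from.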