
I've been trying to debug my code because I know something is going wrong in the kernel, but I can't pin down what specifically. If I try to step into the kernel, the debugger steps straight over the kernel functions and eventually reports an error on quitting:

Single stepping until exit from function dyld_stub_cudaSetupArgument, 
which has no line number information. 
[Launch of CUDA Kernel 0 (incrementArrayOnDevice<<<(3,1,1),(4,1,1)>>>) on 
Device 0] 
[Termination of CUDA Kernel 0 (incrementArrayOnDevice<<<(3,1,1), 
(4,1,1)>>>) on Device 0] 
[Launch of CUDA Kernel 1 (fillinBoth<<<(40,1,1),(1,1,1)>>>) on Device 0] 
[Termination of CUDA Kernel 1 (fillinBoth<<<(40,1,1),(1,1,1)>>>) on Device 0] 
add (below=0x124400, newtip=0x124430, newfork=0x125ac0) at test.cu:1223 

And if I try to set a breakpoint inside the kernel, my entire computer crashes and I have to restart it.

I figure there must be something wrong with the way I'm calling the kernel, but I can't work out what.
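For what it's worth, the excerpt below checks for errors after the memcpy and malloc calls but not after the kernel launch itself. The check I could add there is roughly this (a minimal sketch, not my exact code; checkCUDAError is the same wrapper used in the excerpt):

fillinOne <<< n_blocks, block_size >>> (qsites, chars);
checkCUDAError("kernel launch");        //catches a bad launch configuration immediately
cudaThreadSynchronize();
checkCUDAError("fillinOne execution");  //catches errors raised while the kernel ran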

The code is rather long, so I'm only including an excerpt of it:

__global__ void fillinOne(seqptr qset, long max) {
    int i, j;
    aas aa;
    int idx = blockIdx.x;
    __shared__ long qs[3];
    if(idx < max) 
    {
        memcpy(qs, qset[idx], sizeof(long[3]));
        for (i = 0; i <= 1; i++)
        {
            for (aa = ala; (long)aa <= (long)stop; aa = (aas)((long)aa + 1))
            {
                if (((1L << ((long)aa)) & qs[i]) != 0)
                {
                    for (j = i + 1; j <= 2; j++)
                        qs[j] |= cudaTranslate[(long)aa - (long)ala][j - i];
                }
            }
        }
    }
}

//Kernel for left != NULL and rt != NULL

void fillin(node *p, node *left, node *rt)
{

    cudaError_t err = cudaGetLastError();
    size_t stepsize = chars * sizeof(long);
    size_t sitesize = chars * sizeof(sitearray);
    //int i, j;
    if (left == NULL)
    {
        //copy rt->numsteps into p->numsteps--doesn't actually require CUDA, because no computation to do
        memcpy(p->numsteps, rt->numsteps, stepsize);
        checkCUDAError("memcpy");

        //allocate siteset (array of sitearrays) on device
        seqptr qsites;    //as in array of qs's
        cudaMalloc((void **) &qsites, sitesize);
        checkCUDAError("malloc");

        //copy rt->siteset into device array (equivalent to memcpy(qs, rs) but for whole array)
        cudaMemcpy(qsites, rt->siteset, sitesize, cudaMemcpyHostToDevice);
        checkCUDAError("memcpy");

        //do loop in device
        int block_size = 1; //each site operated on independently
        int n_blocks = chars;
        fillinOne <<< n_blocks, block_size>>> (qsites, chars);
        cudaThreadSynchronize();

        //put qset in p->siteset--equivalent to memcpy(p->siteset[m], qs)
        cudaMemcpy(p->siteset, qsites, sitesize, cudaMemcpyDeviceToHost);
        checkCUDAError("memcpy");

        //Cleanup
        cudaFree(qsites);
    }
}
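(For reference, checkCUDAError is just a small error-reporting wrapper; the real one isn't shown in this excerpt, but it's roughly along these lines, assuming <stdio.h> and <stdlib.h> are included:)

void checkCUDAError(const char *msg)
{
    cudaError_t err = cudaGetLastError();
    if (err != cudaSuccess)
    {
        fprintf(stderr, "CUDA error after %s: %s\n", msg, cudaGetErrorString(err));
        exit(EXIT_FAILURE);
    }
}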

If anyone has any ideas at all, please respond! Thanks in advance!

1 Answer


I suppose you have a single-card configuration. When you are debugging a CUDA kernel and you break inside it, you effectively pause the display driver, and that is what looks like a crash. If you want to use cuda-gdb with only one graphics card, you must use it from a text console (don't start X, or press Ctrl-Alt-Fn from X to switch to a console).

If you have two cards, you must run the code on the card that is not driving the display; select it with cudaSetDevice(n).
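For example, something along these lines early in the program, before any other CUDA calls (a minimal sketch; device 1 is just a placeholder for whichever card is not driving your display):

int deviceCount = 0;
cudaGetDeviceCount(&deviceCount);
if (deviceCount > 1)
    cudaSetDevice(1);   //assumes device 0 drives the display; pick the free card on your system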