
My intention is to use n host threads to create n streams concurrently on an NVIDIA Tesla C2050. The kernel is a simple vector multiplication. I am dividing the data equally among the n streams, and each stream would have concurrent execution/data transfer going on.

The data is floating point. I sometimes get equal CPU/GPU sums, and sometimes they are wide apart. I guess this could be attributed to a lack of synchronization constructs in my code, but I don't think any synchronization between streams is necessary: I want every CPU thread to control a unique stream, and I do not care about asynchronous data copy and kernel execution within a thread.

Following is the code each thread runs:

//each host thread runs this routine concurrently

static CUT_THREADPROC solverThread(TGPUplan *plan)
{

    //Allocate memory
    cutilSafeCall( cudaMalloc((void**)&plan->d_Data, plan->dataN * sizeof(float)) );

    //Copy input data from CPU
    cutilSafeCall( cudaMemcpyAsync((void *)plan->d_Data, (void *)plan->h_Data, plan->dataN * sizeof(float), cudaMemcpyHostToDevice, plan->stream) );
    //to make cudaMemcpyAsync blocking
    cudaStreamSynchronize( plan->stream );

    //launch
    launch_simpleKernel( plan->d_Data, BLOCK_N, THREAD_N, plan->stream);
    cutilCheckMsg("simpleKernel() execution failed.\n");

    cudaStreamSynchronize(plan->stream);

    //Read back GPU results
    cutilSafeCall( cudaMemcpyAsync(plan->h_Data, plan->d_Data, plan->dataN * sizeof(float), cudaMemcpyDeviceToHost, plan->stream) );
    //to make the cudaMemcpyAsync blocking...               
    cudaStreamSynchronize(plan->stream);

    cutilSafeCall( cudaFree(plan->d_Data) );

    CUT_THREADEND;
}

And creation of multiple threads and calling the above function:

    for(i = 0; i < nkernels; i++)
            threadID[i] = cutStartThread((CUT_THREADROUTINE)solverThread, &plan[i]);

    printf("main(): waiting for GPU results...\n");
    cutWaitForThreads(threadID, nkernels);

I took this strategy from one of the CUDA SDK samples. As I said before, this code works sometimes, and other times it gives wayward results. I need help with fixing this code...

2 Answers


First off, I am not an expert by any stretch of the imagination; this is just from my experience.

I don't see why this needs multiple host threads. You are managing one device and passing it multiple streams. The way I've seen this done (pseudocode):

{
//create a handle/context for the device

//allocate an array of streams equal to the number of streams you want
cudaStream_t streamArray[NUM_STREAMS];

for(int n = 0; n < NUM_STREAMS; n++)
{
   cudaStreamCreate(&streamArray[n]);
}

}

From there you can just pass the streams in your array to the various asynchronous calls (cudaMemcpyAsync(), kernel launches, etc.) and the device manages the rest. I've had weird scalability issues with many streams (don't try to make 10k streams; I ran into problems around 4-8 on a GTX 460), so don't be surprised if you run into those. Best of luck,
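To make the pattern concrete, here is a minimal single-threaded sketch of it. The kernel name, NUM_STREAMS, and the sizes are placeholders of mine, not the asker's actual code, and cudaMallocHost is assumed because copies only overlap with pinned host memory:

```cuda
#include <cuda_runtime.h>

#define NUM_STREAMS 4
#define THREAD_N 256

// Hypothetical kernel standing in for the asker's vector multiplication.
__global__ void simpleKernel(float *d_Data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                  // guard threads past the end of the chunk
        d_Data[i] *= 2.0f;
}

int main(void)
{
    const int dataN  = 1 << 20;             // total elements (assumed)
    const int chunkN = dataN / NUM_STREAMS; // elements per stream

    float *h_Data, *d_Data;
    cudaMallocHost(&h_Data, dataN * sizeof(float)); // pinned: required for real overlap
    cudaMalloc(&d_Data, dataN * sizeof(float));
    for (int i = 0; i < dataN; i++) h_Data[i] = 1.0f;

    cudaStream_t stream[NUM_STREAMS];
    for (int s = 0; s < NUM_STREAMS; s++)
        cudaStreamCreate(&stream[s]);

    // Issue each chunk's H2D copy, kernel, and D2H copy in its own stream.
    // No cudaStreamSynchronize between calls: within one stream the three
    // operations already run in order, and across streams they may overlap.
    for (int s = 0; s < NUM_STREAMS; s++) {
        int off    = s * chunkN;
        int blocks = (chunkN + THREAD_N - 1) / THREAD_N; // ceiling division
        cudaMemcpyAsync(d_Data + off, h_Data + off, chunkN * sizeof(float),
                        cudaMemcpyHostToDevice, stream[s]);
        simpleKernel<<<blocks, THREAD_N, 0, stream[s]>>>(d_Data + off, chunkN);
        cudaMemcpyAsync(h_Data + off, d_Data + off, chunkN * sizeof(float),
                        cudaMemcpyDeviceToHost, stream[s]);
    }
    cudaDeviceSynchronize(); // wait for all streams once, at the end

    for (int s = 0; s < NUM_STREAMS; s++)
        cudaStreamDestroy(stream[s]);
    cudaFree(d_Data);
    cudaFreeHost(h_Data);
    return 0;
}
```

Note there is exactly one synchronization point, after all the work has been queued; that is what lets the device interleave copies and kernels from different streams.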

John

1
votes

My bet is that

BLOCK_N, THREAD_N

don't cover the exact size of the array you are passing. Please provide the code for initializing the streams and the sizes of those buffers.

As a side note, streams are useful for overlapping computation with memory transfers. Syncing the stream after each async call defeats that purpose: it serializes the very operations you are trying to overlap.