1 vote

For my final year project, I have chosen to build a library that developers could use to do GPGPU computing with CUDA without having to understand the mechanisms behind the different kernel implementations of the CUDA API (a CUDA wrapper, in other words). This library would likely resemble the OpenMP library. For those unfamiliar with OpenMP, it is an API that supports multi-platform shared-memory multiprocessing in C, where data layout and decomposition are handled automatically by directives. For example, the API parallelizes code in blocks:

 /* a[] and N are assumed to be declared and initialized elsewhere */
 long sum = 0, loc_sum = 0, w;
 int i;
 /* fork off the threads and start the work-sharing construct */
 #pragma omp parallel private(w, loc_sum, i)
 {
   loc_sum = 0;
   #pragma omp for schedule(static, 1)
   for (i = 0; i < N; i++)
   {
     w = i * i;
     loc_sum = loc_sum + w * a[i];
   }
   #pragma omp critical
   sum = sum + loc_sum;
 }
 printf("\n %li", sum);

In my case, I would like to provide the same functionality for CUDA parallel computing on the GPU. Hence, I will need to build a set of compiler directives, library routines, and environment variables that influence run-time behavior. Every CUDA call must be hidden from the programmer.
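
To make this concrete, here is a rough sketch (not taken from any existing library; the names sum_kernel and run_sum are placeholders I made up) of the kind of CUDA code such a directive would have to generate and hide for the loop above: the device allocation, the host-to-device copy, the kernel launch, and the reduction of per-thread partial sums.

 #include <cuda_runtime.h>
 #include <stdlib.h>

 /* Grid-stride kernel: each thread accumulates its share of the loop
    iterations into a private partial sum, one slot per thread. */
 __global__ void sum_kernel(const long *a, long *partial, int n)
 {
   long loc_sum = 0;
   for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
        i += gridDim.x * blockDim.x)
   {
     long w = (long)i * i;
     loc_sum += w * a[i];
   }
   partial[blockIdx.x * blockDim.x + threadIdx.x] = loc_sum;
 }

 /* Host-side plumbing the wrapper would have to emit and hide. */
 void run_sum(const long *host_a, long *host_sum, int n)
 {
   int threads = 256;
   int blocks = (n + threads - 1) / threads;
   long *dev_a, *dev_partial;

   cudaMalloc(&dev_a, n * sizeof(long));
   cudaMalloc(&dev_partial, blocks * threads * sizeof(long));
   cudaMemcpy(dev_a, host_a, n * sizeof(long), cudaMemcpyHostToDevice);

   sum_kernel<<<blocks, threads>>>(dev_a, dev_partial, n);

   /* Copy back the per-thread partial sums and finish the reduction
      on the host (a real library would reduce on the device instead). */
   long *partial = (long *)malloc(blocks * threads * sizeof(long));
   cudaMemcpy(partial, dev_partial, blocks * threads * sizeof(long),
              cudaMemcpyDeviceToHost);
   long sum = 0;
   for (int t = 0; t < blocks * threads; t++)
     sum += partial[t];
   *host_sum = sum;

   free(partial);
   cudaFree(dev_a);
   cudaFree(dev_partial);
 }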

Since CUDA is a SIMD architecture, I know there are many factors that have to be accounted for, especially dependencies between iterations. But for now I suppose that the programmer knows the limitations of GPGPU computing.
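
For example, a loop like the following has a loop-carried dependency, so a directive-based wrapper could not simply map its iterations onto independent GPU threads (the array names here are only for illustration):

 /* each iteration reads the value written by the previous one,
    so the iterations cannot run in parallel as written */
 for (i = 1; i < N; i++)
   a[i] = a[i - 1] + b[i];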

Now, here is where I need your help. Could anyone give me advice on where to start building such a library? Also, does anyone have any good tutorials that could help me deal with compiler directives or environment variables? Or does anyone know of another library that does a similar task and has good documentation?

And most importantly, do you think this is a project that can be done in about 1200 hours? I am already a bit familiar with GPGPU and CUDA, but building such a library is new to me.

Comment (2): What's wrong with OpenACC? nvidia.com/object/openacc-gpu-directives.html – ngimel

2 Answers

1 vote

This isn't so much writing a library as rewriting part of the compiler. Neither GCC nor Visual Studio lets you define your own pragmas, for one thing, and you'd need to play nicely with the built-in optimizer.
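
To illustrate the point (a sketch only; the directive name below is made up): an unmodified compiler simply ignores a pragma it does not recognize, at most emitting a warning, so nothing gets parallelized unless the compiler itself is taught what the directive means.

 /* hypothetical directive: GCC just warns "ignoring #pragma gpu parallel"
    (with -Wunknown-pragmas) and compiles the loop as ordinary serial code */
 #pragma gpu parallel for
 for (i = 0; i < N; i++)
   c[i] = a[i] + b[i];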

Honestly, it seems to me that the actual GPGPU part of this is the easy part.

If you want to see how they did OpenMP in GCC, I suggest looking at the GOMP project history.

1 vote

This is a bit subjective, but it sounds like a very challenging project. It takes a fair amount of thought and planning to structure a problem well enough that the data transfer from host to GPU pays off, and it only makes sense for a subset of problems.

As far as existing projects that do something similar, there are wrappers like PyCUDA and PyOpenCL that expose small pieces of GPU functionality such as matrix math. The one that is perhaps closest is Theano, which is focused on fairly mathematical computations but does a good job of abstracting away the GPU component.