For my final year project, I have chosen to build a library that developers could use for GPGPU computing with CUDA without having to understand the mechanisms behind the different kernel implementations of the CUDA API (a CUDA wrapper, in other words). This library would likely resemble the OpenMP library. For those who are unfamiliar with OpenMP, it is an API that supports multi-platform shared-memory multiprocessing in C, where data layout and decomposition are handled automatically by directives. For example, the API parallelizes code in blocks like this one, which computes a weighted sum:
long sum = 0;
/* fork the threads and start the work-sharing construct */
#pragma omp parallel
{
    long w, loc_sum = 0;   /* each thread gets its own partial sum */
    #pragma omp for schedule(static,1)
    for (int i = 0; i < N; i++)
    {
        w = (long)i * i;
        loc_sum += w * a[i];
    }
    /* merge the per-thread partial sums one thread at a time */
    #pragma omp critical
    sum += loc_sum;
}
printf("\n %li", sum);
In my case, I would like to implement the same functionality for parallel computing with CUDA on the GPU. Hence, I will need to build a set of compiler directives, library routines, and environment variables that influence run-time behavior, so that every CUDA call is hidden from the programmer.
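To make the goal concrete, here is a rough sketch of the kind of interface I imagine. Everything in it is hypothetical: the names gpu_parallel_for and gpu_for_kernel are placeholders I made up, a C++ template taking a device lambda stands in for a real directive (true #pragma support would require compiler changes), and the device lambda needs nvcc's --extended-lambda flag:

#include <cstdio>

/* hypothetical wrapper: run one GPU thread per loop iteration in [0, n) */
template <typename Body>
__global__ void gpu_for_kernel(int n, Body body)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        body(i);
}

template <typename Body>
void gpu_parallel_for(int n, Body body)
{
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    gpu_for_kernel<<<blocks, threads>>>(n, body);
    cudaDeviceSynchronize();   /* hide CUDA's asynchrony from the caller */
}

int main(void)
{
    const int n = 1024;
    float *a;
    cudaMallocManaged(&a, n * sizeof(float));   /* unified memory keeps the interface simple */
    /* the user writes only the loop body; the kernel launch stays hidden */
    gpu_parallel_for(n, [=] __device__ (int i) { a[i] = (float)(i * i); });
    printf("a[10] = %f\n", a[10]);
    cudaFree(a);
    return 0;
}

A header-only template layer like this is only a first step, of course; real directive support in the OpenMP sense would mean writing a source-to-source translator or a compiler plugin.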
Since CUDA uses a SIMT (SIMD-style) execution model, I know there are many factors that have to be accounted for, especially dependencies between iterations. But for now, I will assume that the programmer knows the limitations of GPGPU computing.
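For example, a loop whose iterations are independent maps cleanly to one GPU thread per iteration, while a loop-carried dependency does not (an illustrative fragment, not part of any real API):

/* independent iterations: safe to give one GPU thread per i */
for (i = 0; i < N; i++)
    c[i] = a[i] + b[i];

/* loop-carried dependency: iteration i reads the result of iteration i-1,
   so a naive one-thread-per-iteration mapping computes the wrong answer */
for (i = 1; i < N; i++)
    a[i] = a[i-1] + b[i];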
Now, here is where I need your help. Could anyone give me any advice on where to start building such a library? Also, does anyone have any good tutorials that could help me deal with compiler directives or environment variables? Or does anyone know of another library that does a similar task and has good documentation I could learn from?
And most importantly, do you think this is a project that can be done in about 1200 hours? I am already somewhat familiar with GPGPU and CUDA, but building such a library is new to me.