I'm working on an algorithm that does pretty much the same operation a bunch of times. Since the operation consists of some linear algebra (BLAS), I thought I would try using the GPU for this.
I've written my kernel and started pushing kernels onto the command queue. Since I don't want to wait after each call, I figured I would try daisy-chaining my calls with events and just start pushing these onto the queue:
call kernel1 (returns event1)
call kernel2 (waits for event1, returns event2)
...
call kernel1000000 (waits for event999999)
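
Concretely, my host loop looks roughly like this (a simplified sketch: queue, kernel, global_size and NUM_CALLS stand in for my actual setup, and I enqueue the same kernel each iteration for readability):

    #include <OpenCL/opencl.h>  /* <CL/cl.h> on non-Apple platforms */

    /* Each enqueue waits on the event returned by the previous one,
       so the kernels run in order without the host ever blocking. */
    cl_event prev = NULL;
    for (int i = 0; i < NUM_CALLS; i++) {
        cl_event next;
        cl_int err = clEnqueueNDRangeKernel(queue, kernel, 1, NULL,
                                            &global_size, NULL,
                                            prev ? 1 : 0,        /* wait-list length */
                                            prev ? &prev : NULL, /* wait for previous call */
                                            &next);              /* signaled on completion */
        if (err != CL_SUCCESS) break;
        if (prev) clReleaseEvent(prev);  /* release old events as I go */
        prev = next;
    }
    clFinish(queue);                     /* block only once, at the very end */
    if (prev) clReleaseEvent(prev);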
Now my question is: does all of this get pushed to the graphics chip, or does the driver store the queue? Is there a bound on the number of events I can use, or on the length of the command queue? I've looked around but haven't been able to find this.
I'm using atMonitor to check the utilization of my GPU, and it's pretty hard to push it above 20%. Could this simply be because I'm not able to push the calls out there fast enough? My data is already stored on the GPU, and all I'm passing out there is the actual calls.
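
For reference, here's a sketch of how I could time individual kernels with event profiling to see whether the kernels themselves are fast or the gaps between them are the problem (this assumes the queue was created with CL_QUEUE_PROFILING_ENABLE; event is one of the events from the chain above):

    #include <stdio.h>

    cl_ulong start, end;
    clWaitForEvents(1, &event);
    clGetEventProfilingInfo(event, CL_PROFILING_COMMAND_START,
                            sizeof(cl_ulong), &start, NULL);
    clGetEventProfilingInfo(event, CL_PROFILING_COMMAND_END,
                            sizeof(cl_ulong), &end, NULL);
    printf("kernel took %.3f ms\n", (end - start) * 1e-6);  /* timestamps are in ns */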