1
votes

I couldn't find an answer to this in any documentation I've read about OpenCL so I'm asking: is it possible to control which compute unit executes which algorithm? I want to make one algorithm execute on compute unit 1 and another (different) algorithm execute on compute unit 2 concurrently. I want to be able to define on which compute unit to execute a kernel and possibly on how many processing elements/CUDA cores.

My GPU is Nvidia GeForce GT 525M, it has 2 compute units and 48 CUDA cores per each unit.


1 Answer

2
votes

No, that's not possible. Nor would you want to do it. The GPU knows better than you how to schedule the work to make the most of the device; you should not (and cannot) micro-manage that. You can, of course, influence the scheduling by setting your global and local work group sizes.

If you have two algorithms, A and B, and both are able to fully utilize the GPU on their own, there is no benefit to running them in parallel with each pinned to its own compute unit.

Sequentially:
CU 1: AAAAB
CU 2: AAAAB

In parallel:
CU 1: AAAAAAAA
CU 2: BB

Running them in parallel will actually make the total runtime longer unless A and B have exactly the same runtime: with one compute unit each, the total is max(runtime(A), runtime(B)), whereas running them back to back on both compute units gives runtime(A)/2 + runtime(B)/2, which is never larger.
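The comparison above can be sketched as a toy model. The runtimes below are made-up illustrative values (not measurements), and the model assumes each algorithm scales perfectly across the two compute units:

```python
def sequential(t_a, t_b):
    # Run A then B, each using both compute units,
    # so each finishes in half its single-CU time.
    return t_a / 2 + t_b / 2

def parallel(t_a, t_b):
    # Pin A to CU 1 and B to CU 2; the total time is
    # whichever algorithm finishes last.
    return max(t_a, t_b)

# Hypothetical single-CU runtimes for A and B.
for t_a, t_b in [(8, 2), (5, 5), (10, 1)]:
    print(f"A={t_a} B={t_b}: sequential={sequential(t_a, t_b)}, "
          f"parallel={parallel(t_a, t_b)}")
```

The parallel total is never smaller than the sequential one, and the two are equal only when the runtimes match, which is the point of the diagrams above.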

If this doesn't help you, I suggest you ask a question where you detail your actual use case. What two algorithms you have, what data you have to run them on, what their device usage is, and why you want to run them in parallel.