It sounds like you know that you can get the devices in your system with clGetDeviceIDs, and that you can query them for things like CL_DEVICE_NAME using clGetDeviceInfo.
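(For anyone who isn't there yet, here's a minimal sketch of that enumeration. Passing NULL for the platform is Mac-specific shorthand -- it works because there's exactly one platform there.)
// compile with:
// clang -o list list.c -framework OpenCL
#include <OpenCL/opencl.h>
#include <stdio.h>
int main (int argc, char const *argv[]) {
    cl_device_id devices[16];
    cl_uint num_devices;
    cl_uint i;
    // NULL platform works on the Mac, where there's only the Apple platform.
    clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, 16, devices, &num_devices);
    for (i = 0; i < num_devices; i++) {
        char name[128];
        clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
        fprintf(stdout, "%s\n", name);
    }
    return 0;
}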
Unfortunately, I don't think the OpenCL API currently has a cross-platform way to identify the compute device currently used to drive the display. Most of the time, folks want to get this device so that they can do faster OpenGL / OpenCL sharing by using the same device. In your case, you want to know what device is driving the display in order to ignore it.
However, there is a way to do this that is specific to the Macintosh. Since you mentioned that you're on a Mac, here's the process:
- Create an OpenCL context with your GPU devices.
- Ask the system for the current OpenGL context.
- Ask OpenCL via an extension (from cl_gl_ext.h) which device is driving the display.
- Use the vendor id to ignore that device.
Here's a complete program which will do this on a Mac. I'm running Lion.
// compile with:
// clang -o test test.c -framework GLUT -framework OpenGL -framework OpenCL
#include <GLUT/glut.h>
#include <OpenGL/OpenGL.h>
#include <OpenGL/CGLDevice.h>
#include <OpenCL/opencl.h>
#include <OpenCL/cl_gl_ext.h>
#include <stdint.h> // for intptr_t
#include <stdio.h>
int main (int argc, char const *argv[]) {
    int i;
    cl_int error;

    // We need to do *something* to create a GL context:
    glutInit(&argc, (char**)argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
    glutCreateWindow("OpenCL <-> OpenGL Test");

    // So we can ask CGL for it:
    CGLContextObj gl_context = CGLGetCurrentContext();
    CGLShareGroupObj share_group = CGLGetShareGroup(gl_context);
    cl_context_properties properties[] = { CL_CONTEXT_PROPERTY_USE_CGL_SHAREGROUP_APPLE,
                                           (intptr_t)share_group, 0 };
    cl_context context = clCreateContext(properties, 0, NULL, 0, 0, &error);

    // And now we can ask OpenCL which particular device is being used by
    // OpenGL to do the rendering, currently:
    cl_device_id renderer;
    clGetGLContextInfoAPPLE(context, gl_context,
                            CL_CGL_DEVICE_FOR_CURRENT_VIRTUAL_SCREEN_APPLE,
                            sizeof(renderer), &renderer, NULL);
    cl_uint id_in_use;
    clGetDeviceInfo(renderer, CL_DEVICE_VENDOR_ID, sizeof(cl_uint),
                    &id_in_use, NULL);

    // Determine the number of devices:
    size_t size;
    cl_uint num_devices;
    clGetContextInfo(context, CL_CONTEXT_DEVICES, 0, NULL, &size);
    num_devices = size / sizeof(cl_device_id);
    cl_device_id devices[num_devices];
    clGetContextInfo(context, CL_CONTEXT_DEVICES, size, devices, NULL);

    // Now we have everything we need to use the device that IS NOT doing
    // rendering to the screen for our compute:
    char buf[128];
    cl_uint vendor_id;
    for (i = 0; i < num_devices; i++) {
        clGetDeviceInfo(devices[i], CL_DEVICE_NAME, 128, buf, NULL);
        clGetDeviceInfo(devices[i], CL_DEVICE_VENDOR_ID, sizeof(cl_uint),
                        &vendor_id, NULL);
        fprintf(stdout, "%s (%x)", buf, vendor_id);
        if (vendor_id == id_in_use) {
            fprintf(stdout, " [ in use by GL for display ]\n");
        } else {
            fprintf(stdout, " [ totally free for compute! ]\n");
        }
    }

    clReleaseContext(context);
    return 0;
}
When I try this on my iMac (one GPU), I get:
ATI Radeon HD 6970M (1021b00) [ in use by GL for display ]
But when I try this on a remote box via ssh:
ATI Radeon HD 5770 (1021b00) [ totally free for compute! ]
Show me your output! I don't have a two GPU box :)
Update: on my friend's multi-GPU box, running Mac OS X 10.7.2:
GeForce GTX 285 (1022600) [ totally free for compute! ]
GeForce GT 120 (2022600) [ in use by GL for display ]
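Once you've spotted the free device, using it is the usual OpenCL dance. For example, a sketch that reuses context, devices, num_devices, vendor_id, and id_in_use from the program above:
    // Pick the first device that is NOT driving the display and make a
    // command queue on it.
    cl_device_id compute_device = NULL;
    for (i = 0; i < num_devices; i++) {
        clGetDeviceInfo(devices[i], CL_DEVICE_VENDOR_ID, sizeof(cl_uint),
                        &vendor_id, NULL);
        if (vendor_id != id_in_use) {
            compute_device = devices[i];
            break;
        }
    }
    if (compute_device) {
        cl_command_queue queue = clCreateateCommandQueue == 0 ? NULL :
                                 clCreateCommandQueue(context, compute_device,
                                                      0, &error);
        // ... build programs and enqueue kernels against queue here ...
        clReleaseCommandQueue(queue);
    }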
Note that there might be a better way than GLUT to get GL up and running. But GLUT's not so bad -- you don't even have to show a window on the screen. This program doesn't.
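For instance, one GLUT-free route is to ask CGL for a context directly. Here's an untested sketch that would replace the three glut* calls above (OpenGL/OpenGL.h is already included):
    // Sketch: create a GL context via CGL, no window at all.
    // kCGLPFAAllowOfflineRenderers may also be worth adding so that GPUs
    // not driving any display end up in the share group.
    CGLPixelFormatAttribute attrs[] = { kCGLPFAAccelerated,
                                        (CGLPixelFormatAttribute)0 };
    CGLPixelFormatObj pixel_format;
    GLint num_formats;
    CGLChoosePixelFormat(attrs, &pixel_format, &num_formats);

    CGLContextObj gl_context;
    CGLCreateContext(pixel_format, NULL, &gl_context);
    CGLDestroyPixelFormat(pixel_format);
    CGLSetCurrentContext(gl_context);

    // ... CGLGetCurrentContext() now returns this context, so everything
    // from CGLGetShareGroup() onward works exactly as before ...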