3 votes

My Mac Pro (OS X 10.7) has two GPUs. The System Information app shows the following detail under Graphics/Displays:

    ATI Radeon HD 5770:
      Bus:       PCIe
      Slot:      Slot-1
      Vendor:    ATI (0x1002)
      Device ID: 0x68b8
      ...

    ATI Radeon HD 5770:
      Bus:       PCIe
      Slot:      Slot-2
      Device ID: 0x68b8
      Displays:
        LED Cinema Display:
          Main Display: Yes
          ...

I want to use the GPU that is not attached to the display for computation in a Java application with low-level OpenCL 1.1 bindings. How can I programmatically discover the GPU device in Slot-1?

From my log file showing the results of device info queries:

... Device ATI Radeon HD 5770[AMD]: vendorId[1021b00] ...
... Device ATI Radeon HD 5770[AMD]: vendorId[2021b00] ...

Related post: How to match OpenCL devices with a specific GPU given PCI vendor, device and bus IDs in a multi-GPU system?

Comments:

user978122: What API are you using for OpenCL support?
wjohnson: @user978122 LWJGL 2.8.2 bindings with OpenCL 1.1.

2 Answers

5 votes

It sounds like you know that you can get the devices in your system with clGetDeviceIDs, and that you can query them for things like CL_DEVICE_NAME using clGetDeviceInfo.
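
For completeness, that basic enumeration looks roughly like this in plain C (a minimal sketch: it passes NULL for the platform, which Apple's implementation accepts, and omits error checking):

// compile with:
// clang -o list list.c -framework OpenCL
#include <OpenCL/opencl.h>
#include <stdio.h>

int main (void) {
  cl_device_id devices[16];
  cl_uint num_devices, i;
  char name[128];
  cl_uint vendor_id;

  // Grab up to 16 GPU devices (NULL platform works on Apple's OpenCL):
  clGetDeviceIDs(NULL, CL_DEVICE_TYPE_GPU, 16, devices, &num_devices);

  // Print each device's name and vendor id:
  for (i = 0; i < num_devices; i++) {
    clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
    clGetDeviceInfo(devices[i], CL_DEVICE_VENDOR_ID, sizeof(vendor_id),
      &vendor_id, NULL);
    printf("%s (vendor id %x)\n", name, vendor_id);
  }
  return 0;
}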

Unfortunately, I don't think the OpenCL API currently has a cross-platform way to identify the compute device currently used to drive the display. Most of the time, folks want to get this device so that they can do faster OpenGL / OpenCL sharing by using the same device. In your case, you want to know what device is driving the display in order to ignore it.

However, there is a way to do this that is specific to the Mac. Since you mentioned that you're on one, here's the process:

  1. Create an OpenCL context with your GPU devices.
  2. Ask the system for the current OpenGL context.
  3. Ask OpenCL via an extension (from cl_gl_ext.h) which device is driving the display.
  4. Use the vendor id to ignore that device.

Here's a complete program which will do this on a Mac. I'm running Lion.

// compile with:
// clang -o test test.c -framework GLUT -framework OpenGL -framework OpenCL
#include <GLUT/glut.h>
#include <OpenGL/OpenGL.h>
#include <OpenGL/CGLDevice.h>
#include <OpenCL/opencl.h>
#include <OpenCL/cl_gl_ext.h>
#include <stdio.h>

int main (int argc, char *argv[]) {
  int i;
  cl_int error;

  // We need to do *something* to create a GL context:
  glutInit( &argc, argv );
  glutInitDisplayMode( GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH );
  glutCreateWindow( "OpenCL <-> OpenGL Test" );

  // So we can ask CGL for it:
  CGLContextObj gl_context = CGLGetCurrentContext();

  CGLShareGroupObj share_group = CGLGetShareGroup(gl_context);
  cl_context_properties properties[] = { CL_CONTEXT_PROPERTY_USE_CGL_SHAREGROUP_APPLE, 
    (intptr_t)share_group, 0 };
  cl_context context = clCreateContext(properties, 0, NULL, 0, 0, &error);

  // And now we can ask OpenCL which particular device is being used by
  // OpenGL to do the rendering, currently:
  cl_device_id renderer;
  clGetGLContextInfoAPPLE(context, gl_context, 
    CL_CGL_DEVICE_FOR_CURRENT_VIRTUAL_SCREEN_APPLE, sizeof(renderer), 
    &renderer, NULL);

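  // Note: on Apple's OpenCL implementation the "vendor id" appears to be
  // unique per device, not just per vendor (note the distinct values in
  // the output below), so it can distinguish even two identical cards.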
  cl_uint id_in_use;
  clGetDeviceInfo(renderer, CL_DEVICE_VENDOR_ID, sizeof(cl_uint), 
    &id_in_use, NULL);

  // Determine the number of devices:
  size_t size;
  cl_uint num_devices;
  clGetContextInfo(context, CL_CONTEXT_DEVICES, 0, NULL, &size);

  num_devices = size / sizeof(cl_device_id);
  cl_device_id devices[num_devices];
  clGetContextInfo(context, CL_CONTEXT_DEVICES, size, devices, NULL);

  // Now we have everything we need to use the device that IS NOT doing
  // rendering to the screen for our compute:
  char buf[128];
  cl_uint vendor_id;  
  for (i = 0; i < num_devices; i++) {
    clGetDeviceInfo(devices[i], CL_DEVICE_NAME, 128, buf, NULL);
    clGetDeviceInfo(devices[i], CL_DEVICE_VENDOR_ID, sizeof(cl_uint), &vendor_id, NULL);
    fprintf(stdout, "%s (%x)", buf, vendor_id);
    if (vendor_id == id_in_use) {
      fprintf(stdout, " [ in use by GL for display ]\n");
    } else {
      fprintf(stdout, " [ totally free for compute! ]\n");
    }      
  }

  clReleaseContext(context);
  return 0;
}

When I try this on my iMac (one GPU), I get:

ATI Radeon HD 6970M (1021b00) [ in use by GL for display ]

But when I try this on a remote box via ssh:

ATI Radeon HD 5770 (1021b00) [ totally free for compute! ]

Show me your output! I don't have a two-GPU box myself. :) Here's the result on a friend's multi-GPU box, running Mac OS X 10.7.2:

GeForce GTX 285 (1022600) [ totally free for compute! ]
GeForce GT 120 (2022600) [ in use by GL for display ] 

Note that there might be a better way than GLUT to get GL up and running, but GLUT's not so bad -- you don't even have to show a window on the screen, and this program doesn't.
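
For instance, you could skip GLUT and create the context directly with CGL. A rough sketch (error checking omitted; the function name is mine, and kCGLPFAAllowOfflineRenderers is my assumption for making GPUs without an attached display visible to the share group):

// compile with:
// clang -c cgl_context.c -o cgl_context.o
#include <OpenGL/OpenGL.h>

CGLContextObj create_gl_context (void) {
  // Ask for any accelerated renderer; kCGLPFAAllowOfflineRenderers should
  // also expose GPUs that have no display attached.
  CGLPixelFormatAttribute attribs[] = {
    kCGLPFAAccelerated, kCGLPFAAllowOfflineRenderers,
    (CGLPixelFormatAttribute)0
  };
  CGLPixelFormatObj pixel_format;
  GLint num_formats;
  CGLContextObj gl_context;

  CGLChoosePixelFormat(attribs, &pixel_format, &num_formats);
  CGLCreateContext(pixel_format, NULL, &gl_context);
  CGLDestroyPixelFormat(pixel_format);
  CGLSetCurrentContext(gl_context);
  return gl_context;
}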

1 vote

You might be interested in my library: https://github.com/nbigaouette/oclutils/

I developed the library to manage multiple OpenCL devices on a machine. It automatically sorts the list of available devices by their reported number of compute units (CL_DEVICE_MAX_COMPUTE_UNITS), so on my machine it always picks one of the two powerful Nvidia GTX 580s instead of the (crappy) GT220 that drives the display.
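
The gist of that heuristic is just the underlying OpenCL query; a sketch (not the library's actual API, and the function name is mine):

// Pick the GPU reporting the most compute units.
#include <OpenCL/opencl.h>

cl_device_id pick_biggest_gpu (cl_device_id *devices, cl_uint num_devices) {
  cl_device_id best;
  cl_uint best_units = 0, units, i;

  if (num_devices == 0) return NULL;
  best = devices[0];
  for (i = 0; i < num_devices; i++) {
    clGetDeviceInfo(devices[i], CL_DEVICE_MAX_COMPUTE_UNITS,
      sizeof(units), &units, NULL);
    if (units > best_units) {
      best_units = units;
      best = devices[i];
    }
  }
  return best;
}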

It supports the NVIDIA (GPU only), AMD (CPU and/or GPU), Intel (CPU only), and Apple (CPU and/or GPU) platforms.

Note that it won't be able to distinguish which of two identical cards is driving the display, so it's not a perfect solution to your problem. I might try to integrate James' solution, as this is something I'm interested in too.