6 votes

According to the OpenGL 4.0 specification, glDrawPixels is deprecated.

For CUDA interoperability it seems best to use OpenGL buffer objects. (Textures or surfaces would be an alternative, but they have caching/concurrency issues and are therefore unusable for my CUDA kernel.)

I simply want to write a CUDA kernel that uses the mapped OpenGL buffer object as a "pixel array", i.e. a piece of memory holding pixels; afterwards the buffer is unmapped.
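
Something like the following is what I have in mind. This is only a rough sketch using the CUDA runtime API; the RGBA8 pixel layout and the names fillPixels, runKernelOnBuffer, width and height are just placeholders I made up:

    #include <GL/glew.h>
    #include <cuda_gl_interop.h>

    // Kernel that treats the mapped buffer object as a plain array of pixels.
    __global__ void fillPixels(uchar4* pixels, int width, int height)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x < width && y < height)
            pixels[y * width + x] = make_uchar4(x % 256, y % 256, 0, 255); // each pixel written once
    }

    void runKernelOnBuffer(GLuint pbo, int width, int height)
    {
        // Register the GL buffer with CUDA (in a real program: once, right after creating it).
        cudaGraphicsResource* resource = nullptr;
        cudaGraphicsGLRegisterBuffer(&resource, pbo, cudaGraphicsRegisterFlagsWriteDiscard);

        // Map the buffer so the kernel can use it as a piece of device memory holding pixels.
        cudaGraphicsMapResources(1, &resource, 0);
        uchar4* devPixels = nullptr;
        size_t size = 0;
        cudaGraphicsResourceGetMappedPointer((void**)&devPixels, &size, resource);

        dim3 block(16, 16);
        dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
        fillPixels<<<grid, block>>>(devPixels, width, height);

        // Unmap before OpenGL touches the buffer again.
        cudaGraphicsUnmapResources(1, &resource, 0);
        cudaGraphicsUnregisterResource(resource);
    }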

I then want the OpenGL program to draw the buffer object to the framebuffer, using an OpenGL API that is not deprecated.

What other ways/APIs are there to draw a buffer object to the framebuffer? (Renderbuffers cannot be used either, since they probably have the same caching issues as CUDA arrays; does this rule out the framebuffer object extension?)

Is there a gap/missing functionality in OpenGL 4.0 now that glDrawPixels is deprecated, or is there an alternative?

I don't understand. Your first point states that OpenGL Buffer Objects are recommended for CUDA/OpenCL-OpenGL interop (and they are). Are you looking for other alternatives, or is there some issue with that avenue? – Ani
The CUDA manual mentions three solutions for CUDA graphics interoperability with OpenGL: buffer objects, textures and renderbuffers. It later mentions that textures and renderbuffers have caching issues when read/written by multiple threads. So I conclude that the only option left is buffer objects. The question is now how to display a buffer object with OpenGL. – Skybuck Flying

2 Answers

5 votes

glDrawPixels has been removed from the core profile of OpenGL 3.2 and above (it is not merely deprecated; deprecated means "still available but to be removed in the future"). It was removed because it is generally not a fast way to draw pixel data to the screen.

Your best bet is to use glTexSubImage2D to copy the buffer's contents into a texture and then draw that to the screen, or blit the texture to the default framebuffer with glBlitFramebuffer.
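
A rough sketch of that path, assuming a core profile context with a function loader such as GLEW, an RGBA8 pixel format, and that tex and fbo were created beforehand with the texture attached to the FBO as its color attachment:

    #include <GL/glew.h>

    // pbo already holds width x height RGBA8 pixels (e.g. written by a CUDA kernel).
    void presentBuffer(GLuint pbo, GLuint tex, GLuint fbo, int width, int height)
    {
        // Copy from the buffer object into the texture. With a buffer bound to
        // GL_PIXEL_UNPACK_BUFFER, the last argument of glTexSubImage2D is an offset
        // into that buffer, so the data never takes a CPU round trip.
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                        GL_RGBA, GL_UNSIGNED_BYTE, (const void*)0);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

        // Blit the texture (through its framebuffer object) into the default framebuffer.
        // Alternatively, draw a fullscreen quad sampling the texture with a trivial shader.
        glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
        glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                          GL_COLOR_BUFFER_BIT, GL_NEAREST);
    }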

3 votes

It seems the only solution is the following:

  1. Create an "(OpenGL) pixel buffer object", which is hopefully the same thing as a general "(OpenGL) buffer object" (see the sketch after this list).

  2. Use the pixel buffer object for CUDA interoperability. (If that is not possible, try a general buffer object.)

  3. Then either copy the pixel buffer object into a texture with the glTex* OpenGL API calls and draw the texture to the default framebuffer. (This is probably a double copy, so probably the slowest method.)

  4. Or try to draw the pixel buffer object directly to the framebuffer. I am not sure whether this requires a special framebuffer object/extension. (This might be faster if it can be done directly, since it would be just one copy.)
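
Here is a rough sketch of steps 1 and 2 as I currently understand them, using the CUDA runtime API; the name createSharedPixelBuffer and the RGBA8 buffer size are just my assumptions:

    #include <GL/glew.h>
    #include <cuda_gl_interop.h>

    // Creates the buffer (step 1) and registers it with CUDA (step 2).
    GLuint createSharedPixelBuffer(int width, int height, cudaGraphicsResource** outResource)
    {
        // Step 1: an ordinary buffer object. It only acts as a "pixel buffer object"
        // while it is bound to GL_PIXEL_UNPACK_BUFFER (or GL_PIXEL_PACK_BUFFER),
        // so a pixel buffer object really is just a general buffer object.
        GLuint pbo = 0;
        glGenBuffers(1, &pbo);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
        glBufferData(GL_PIXEL_UNPACK_BUFFER, (GLsizeiptr)width * height * 4, nullptr, GL_DYNAMIC_DRAW);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

        // Step 2: register the buffer for CUDA interoperability; WriteDiscard because
        // the kernel only writes into it.
        cudaGraphicsGLRegisterBuffer(outResource, pbo, cudaGraphicsRegisterFlagsWriteDiscard);
        return pbo;
    }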

Additional information that was deleted from a separate answer, included here:

The CUDA manual mentions three possible OpenGL interoperability resources:

Buffer objects
Textures
Render buffers

The CUDA manual also mentions that textures and renderbuffers are cached and have concurrency issues if the same pixel is read/written by multiple threads within the same kernel call.

Perhaps my kernel only needs to write each output pixel once, so I might get away with using textures or renderbuffers. But the concurrency issue makes me a bit nervous... what if it does need to read/write the same pixel multiple times from multiple threads? I guess in that case I will have to use buffer objects...

Buffer objects also seem handy since they could be used as source objects for the CUDA kernel as well, so they serve multiple purposes, input and output. Therefore they are probably the best option to start with. However, I am not yet sure whether a buffer object is the same as a pixel buffer object. I think so, though.. ;)

It also seems a bit easier to implement than options 2 and 3 (textures and renderbuffers), which require extra API calls and types.