
I have an OpenGL application which renders data into an RGBA texture. I want to encode and stream it using the GStreamer framework (using the nvenc plugin for H.264 encoding).

I was looking through the documentation to solve these problems:

  1. How do I export the app's existing OpenGL context to the nvenc element?
  2. How do I pass in the texture id to source frames from?
  3. How will synchronization work? That is, nvenc has to wait for rendering to finish, and similarly the app has to wait for nvenc to finish reading from the texture. I am assuming this would involve either sync fences or glMemoryBarrier.

Any sample code would be really helpful.

I do want to avoid any texture copies to CPU memory. NVIDIA's NVENC SDK mentions that it uses a CUDA context to make its calls, and an OpenGL texture can be imported into a CUDA context using the cudaGraphicsGLRegisterImage call. So my expectation is that the path from app to encoded video frame can be done without any copies.
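
For context, here is roughly the interop path I have in mind, pieced together from the CUDA documentation. This is only a sketch: the helper names are mine and error handling is omitted.

    #include <cuda_runtime.h>
    #include <cuda_gl_interop.h>
    #include <GL/gl.h>

    cudaGraphicsResource_t resource = nullptr;

    // One-time registration of the app's RGBA texture with CUDA.
    // Read-only access is enough, since the encoder only reads it.
    void register_texture(GLuint tex)
    {
        cudaGraphicsGLRegisterImage(&resource, tex, GL_TEXTURE_2D,
                                    cudaGraphicsRegisterFlagsReadOnly);
    }

    // Per frame: map the resource and fetch the underlying CUDA array.
    // Mapping synchronizes with outstanding GL work on the texture, so
    // the encoder only ever sees fully rendered frames.
    cudaArray_t map_frame()
    {
        cudaGraphicsMapResources(1, &resource, /*stream=*/0);
        cudaArray_t array = nullptr;
        cudaGraphicsSubResourceGetMappedArray(&array, resource, 0, 0);
        return array;  // hand this to the encoder, then unmap
    }

    // After the encoder is done reading, release the texture back to GL.
    void unmap_frame()
    {
        cudaGraphicsUnmapResources(1, &resource, /*stream=*/0);
    }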


1 Answer


I know this is an old question, but just in case someone else hits this problem...

  1. If your NVENC calls and your OpenGL app are in the same thread, you don't need to do anything with the context.

    If not, you should probably create two OpenGL contexts: one for rendering, one for encoding. The two contexts should share objects, as explained in https://www.khronos.org/opengl/wiki/OpenGL_Context. There is a context-creation sketch after this list.

    You can also create only one context and transfer it between threads by making it "current" on whichever thread is accessing the OpenGL objects, but I found the two-context approach much easier.

  2. The texture id is just an integer, so pass it directly.

  3. NvEncMapInputResource "provides synchronization guarantee that any graphics or compute work submitted on the input buffer is completed before the buffer is used for encoding", and NvEncEncodePicture has a "synchronous mode of encoding". The per-frame sketch after this list shows where the map and unmap calls sit.

  4. As of today, NVENC supports an OpenGL encode device on Linux, so you don't have to register the OpenGL texture in CUDA. NVENC can access the OpenGL texture directly, so there's no memory copy on the client side.

    If you're working on Windows, I believe you can create a CUDA encode device, then get a CUarray from an OpenGL texture (much like the interop sketch in the question), and NVENC can access the CUarray.
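
To illustrate point 1, here is a minimal sketch of the two shared contexts using GLX on Linux. Take it as an outline only: the function name is mine, and the Display and XVisualInfo are assumed to come from your app's existing setup.

    #include <GL/glx.h>

    // Create a render context and an encode context that share objects.
    // "dpy" and "vi" are assumed to come from the app's existing X11
    // setup (XOpenDisplay / glXChooseVisual).
    void create_shared_contexts(Display* dpy, XVisualInfo* vi,
                                GLXContext* render_ctx,
                                GLXContext* encode_ctx)
    {
        // First context has no share partner.
        *render_ctx = glXCreateContext(dpy, vi, nullptr, True);

        // Second context shares objects (textures, buffers) with the
        // first, so a texture id created on the render thread is valid
        // on the encode thread as well.
        *encode_ctx = glXCreateContext(dpy, vi, *render_ctx, True);
    }

Each thread then calls glXMakeCurrent with its own context before touching any GL objects.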
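
And for points 3 and 4, a heavily abridged sketch of the per-frame flow with an OpenGL encode device, based on my reading of the NVENC SDK headers (nvEncodeAPI.h). Session setup, output bitstream handling, and error checking are omitted, and in real code you would register the texture once, not every frame.

    #include <nvEncodeAPI.h>
    #include <GL/gl.h>
    #include <cstdint>

    // "nvenc" is the populated NV_ENCODE_API_FUNCTION_LIST, "encoder"
    // the already-opened session, "bitstream" a previously created
    // output buffer.
    void encode_texture(NV_ENCODE_API_FUNCTION_LIST& nvenc, void* encoder,
                        GLuint tex, uint32_t width, uint32_t height,
                        NV_ENC_OUTPUT_PTR bitstream)
    {
        // Describe the GL texture to NVENC.
        NV_ENC_INPUT_RESOURCE_OPENGL_TEX gl_tex = { tex, GL_TEXTURE_2D };

        NV_ENC_REGISTER_RESOURCE reg = { NV_ENC_REGISTER_RESOURCE_VER };
        reg.resourceType = NV_ENC_INPUT_RESOURCE_TYPE_OPENGL_TEX;
        reg.resourceToRegister = &gl_tex;
        reg.width = width;
        reg.height = height;
        reg.pitch = width * 4;                   // tightly packed RGBA
        reg.bufferFormat = NV_ENC_BUFFER_FORMAT_ABGR;
        nvenc.nvEncRegisterResource(encoder, &reg);

        // Mapping is what gives the synchronization guarantee quoted in
        // point 3: GL work on the texture completes before encoding.
        NV_ENC_MAP_INPUT_RESOURCE map = { NV_ENC_MAP_INPUT_RESOURCE_VER };
        map.registeredResource = reg.registeredResource;
        nvenc.nvEncMapInputResource(encoder, &map);

        NV_ENC_PIC_PARAMS pic = { NV_ENC_PIC_PARAMS_VER };
        pic.inputBuffer = map.mappedResource;
        pic.bufferFmt = map.mappedBufferFmt;
        pic.inputWidth = width;
        pic.inputHeight = height;
        pic.pictureStruct = NV_ENC_PIC_STRUCT_FRAME;
        pic.outputBitstream = bitstream;
        nvenc.nvEncEncodePicture(encoder, &pic);

        // Unmapping releases the texture back to the GL side.
        nvenc.nvEncUnmapInputResource(encoder, map.mappedResource);
    }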

Full sample code for both the OpenGL and CUDA encode devices can be found in the samples of the NVENC SDK.