I have an OpenGL application which renders data into an RGBA texture. I want to encode and stream it using the GStreamer framework (using the nvenc plugin for H.264 encoding).
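To make the goal concrete, this is roughly the pipeline I am aiming for (only a sketch: the `nvh264enc` element name comes from the nvenc plugin, while the appsrc/GLMemory caps, the resolution, and the RTP sink are my guesses at how the texture hand-off might look):

```c
#include <gst/gst.h>

int main(int argc, char *argv[]) {
  gst_init(&argc, &argv);
  GError *err = NULL;

  /* Sketch: push GL-backed RGBA frames from the app into nvenc and stream
   * them over RTP. Caps, host, and port are placeholders. */
  GstElement *pipeline = gst_parse_launch(
      "appsrc name=gl-src caps=\"video/x-raw(memory:GLMemory),format=RGBA,"
      "width=1920,height=1080,framerate=60/1\" ! "
      "nvh264enc ! h264parse ! rtph264pay ! udpsink host=127.0.0.1 port=5000",
      &err);
  if (pipeline == NULL) {
    g_printerr("parse error: %s\n", err->message);
    return 1;
  }

  gst_element_set_state(pipeline, GST_STATE_PLAYING);
  /* ... app renders and pushes buffers on the "gl-src" appsrc ... */
  return 0;
}
```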
I was looking through the documentation to solve these problems:
- How to share the app's existing OpenGL context with the nvenc element.
- How to pass in the texture ID that the encoder should source frames from.
- How synchronization will work, i.e. nvenc has to wait for rendering to finish, and likewise the app has to wait for nvenc to finish reading from the texture. I am assuming this involves either sync fences or glMemoryBarrier; a sketch of what I have in mind follows this list.
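For the synchronization point specifically, this is the kind of fencing I have in mind (an untested sketch of my current understanding; the function names and the producer/consumer split are hypothetical):

```c
#include <epoxy/gl.h>  /* any GL loader exposing ARB_sync works */

/* Producer (render thread): fence after the last draw into the texture. */
GLsync signal_render_done(void) {
  GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
  glFlush();  /* make sure the fence command is actually submitted */
  return fence;
}

/* Consumer (encoder side, in a shared GL context): wait before reading. */
void wait_render_done(GLsync fence) {
  glWaitSync(fence, 0, GL_TIMEOUT_IGNORED);  /* GPU-side wait, CPU not blocked */
  /* or block the CPU instead, e.g. up to 16 ms:
   * glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, 16000000); */
  glDeleteSync(fence);
}
```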
Any sample code would be really helpful.
I want to avoid any texture copies to CPU memory. NVIDIA's NVENC SDK mentions that it uses a CUDA context to make its calls, and an OpenGL texture can be imported into a CUDA context using the cudaGraphicsGLRegisterImage call. So my expectation is that the whole path from the app to the encoded video frame can be done without any copies.
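In other words, I expect the interop step to look roughly like this (a sketch based on the CUDA runtime docs; `encode_one_frame` is a hypothetical helper, `texture_id` is the app's RGBA texture, and error handling is omitted):

```c
#include <cuda_gl_interop.h>

/* Register the GL texture with CUDA, map it, and read it as a cudaArray.
 * While mapped, CUDA owns the texture, so the app must not render into it. */
void encode_one_frame(GLuint texture_id) {
  cudaGraphicsResource_t resource;
  cudaGraphicsGLRegisterImage(&resource, texture_id, GL_TEXTURE_2D,
                              cudaGraphicsRegisterFlagsReadOnly);

  cudaGraphicsMapResources(1, &resource, 0);
  cudaArray_t frame;
  cudaGraphicsSubResourceGetMappedArray(&frame, resource, 0, 0);
  /* ... hand `frame` to the encoder as its input surface ... */
  cudaGraphicsUnmapResources(1, &resource, 0);  /* ownership back to GL */

  cudaGraphicsUnregisterResource(resource);
}
```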