I am working on a video analytics application in which I have to decode an RTSP stream into IplImage frames, which are then fed into my analytics pipeline. The OpenCV VideoCapture class lets me extract frames from an RTSP stream (I think it uses ffmpeg internally), but the performance is not great, and it needs to work in real time.

I also went ahead and wrote my own ffmpeg decoder. But just like OpenCV, performance with RTSP streams is poor: lots of frames are dropped. Decoding from a local file works fine, however. I am still refining the code.

What I need help with is this: can I use hardware-accelerated decoding here to improve performance? My app is supposed to be cross-platform, so I might need to use DirectX VA (Windows) and VAAPI (Linux). If so, is there anywhere I can learn how to implement hardware acceleration in code, especially for ffmpeg decoding of H.264?
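Before wiring hardware acceleration into your own decoder, it may be worth checking whether your platform's VAAPI path can keep up with the stream at all. The ffmpeg command-line tool exposes this through its -hwaccel option (vaapi on Linux, dxva2 on Windows). A minimal sketch that builds such a command; the RTSP URL and render-node path are placeholders, not values from your setup:

```python
import subprocess

# Hypothetical RTSP URL and VAAPI render node -- replace with your own.
RTSP_URL = "rtsp://camera.example/stream"
VAAPI_DEVICE = "/dev/dri/renderD128"

def ffmpeg_hwdec_cmd(url, device):
    """Build an ffmpeg command that decodes an H.264 RTSP stream via VAAPI.

    -hwaccel vaapi selects the VAAPI decode path; on Windows the
    equivalent would be -hwaccel dxva2. "-f null -" discards the
    output, so the run only measures decode throughput.
    """
    return [
        "ffmpeg",
        "-hwaccel", "vaapi",
        "-hwaccel_device", device,
        "-rtsp_transport", "tcp",  # TCP avoids UDP packet loss on lossy links
        "-i", url,
        "-f", "null", "-",
    ]

cmd = ffmpeg_hwdec_cmd(RTSP_URL, VAAPI_DEVICE)
print(" ".join(cmd))
# To actually run it (requires an ffmpeg build with VAAPI support):
# subprocess.run(cmd, check=True)
```

If this command decodes in real time while your software decoder drops frames, hardware acceleration is the right direction; if it also drops frames, the bottleneck is more likely network or stream buffering.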

ffmpeg already has VAAPI (and also VDPAU) support, I think. – sashoalm

Old question I'm trying to answer myself, but... while ffmpeg supports hardware acceleration through VAAPI and QSV, getting it to actually do so requires configuration when using the command line. That config does not seem to exist in the OpenCV API. – TheAtomicOption

1 Answer


As far as I know, VideoCapture with the ffmpeg backend does not support hardware-accelerated decoding.

I think you can use VideoCapture with GStreamer as the backend instead; there you can build a custom pipeline and enable hardware acceleration via VAAPI.

I'm using this pipeline:

rtspsrc location=%s latency=0 ! queue ! rtph264depay ! h264parse ! vaapidecodebin ! videorate ! videoscale ! videoconvert ! video/x-raw,width=640,height=480,framerate=5/1,format=BGR ! appsink
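You pass the whole pipeline string to VideoCapture and read frames as usual. A small sketch that fills in the %s placeholder and parameterizes the resolution and framerate from my pipeline above; the RTSP URL is a placeholder, and the cv2 calls are shown as comments since they need an OpenCV build compiled with GStreamer support:

```python
# Build the GStreamer pipeline string for OpenCV's GStreamer backend.
# Width, height and framerate defaults match the pipeline in the answer.

def build_pipeline(url, width=640, height=480, fps=5):
    return (
        f"rtspsrc location={url} latency=0 ! queue ! "
        f"rtph264depay ! h264parse ! vaapidecodebin ! "
        f"videorate ! videoscale ! videoconvert ! "
        f"video/x-raw,width={width},height={height},"
        f"framerate={fps}/1,format=BGR ! appsink"
    )

pipeline = build_pipeline("rtsp://camera.example/stream")
print(pipeline)

# With OpenCV built against GStreamer (not runnable here without cv2):
# import cv2
# cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
# ok, frame = cap.read()  # frame arrives as a BGR image
```

The format=BGR caps at the end matter: they make videoconvert hand appsink frames in the layout OpenCV expects, so no extra color conversion is needed on your side.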