
I am struggling to use ffmpeg for the remote control of an autonomous truck.

There are 3 video streams from cameras on the local network, each described by an .sdp file like this one (MJPEG over RTP, correct me if I'm wrong):

m=video 50910 RTP/AVP 26
c=IN IP4 192.168.1.91
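For completeness, a minimal .sdp file that ffmpeg accepts might look like the sketch below; everything except the m= and c= lines is boilerplate I filled in, and payload type 26 is the static RTP payload type for JPEG (RFC 3551), which is why no a=rtpmap line is required:

```
v=0
o=- 0 0 IN IP4 192.168.1.91
s=cam1
c=IN IP4 192.168.1.91
t=0 0
m=video 50910 RTP/AVP 26
```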

I want to combine the three pictures into a single video stream using this:

ffmpeg -hide_banner -protocol_whitelist "rtp,file,udp" -i "cam1.sdp" \
-protocol_whitelist "rtp,file,udp" -i "cam2.sdp" \
-protocol_whitelist "rtp,file,udp" -i "cam3.sdp" \
-filter_complex "\
nullsrc=size=1800x600 [back]; \
[back][1:v]overlay=x=1000[tmp1]; \
[tmp1][2:v]overlay=x=600[tmp2]; \
[tmp2][0:v]overlay" \
-vcodec libx264 \
-crf 25 -maxrate 4M -bufsize 8M -r 30 -preset ultrafast -tune zerolatency \
-f mpegts udp://localhost:1234

When I launch this, ffmpeg starts emitting errors about lost RTP packets. The fps of every camera in the output seems unstable, which is unacceptable. I can launch ffplay or mplayer on all three cameras simultaneously, and I can also produce such a combined stream using a pre-recorded video file as input. So it seems ffmpeg just can't read three UDP streams fast enough. The cameras stream at 10 Mbit/s, 800x600, 30 fps MJPEG; those are the minimal settings I can afford, and the cameras can do much more.

So I tried adjusting the UDP buffer size. There is a way to set buffer_size and fifo_size for a udp:// stream, but no such option for a stream described by an .sdp file. I did find a way to run the stream with an rtp://-style URL, but it doesn't seem to pass the arguments after '?' down to the UDP layer.

My next idea was to launch multiple ffmpeg instances: receive the streams separately, process them, and re-stream each one to another instance that would consume any kind of stream, stitch the pictures together and send the result out. That would actually be a good setup anyway, since I need to filter the streams individually (crop, lens-correct, rotate), and one large -filter_complex in a single ffmpeg instance might not handle all the streams. And I'm going to have 3 more of them.

I tried to implement this setup using 3 FIFO pipes, and alternatively using 3 udp://localhost:124x internal streams. Neither approach solved my problem, but the separate ffmpeg instances do seem able to receive three streams simultaneously: I can open the repeated streams through the pipes or via UDP in mplayer or ffplay, and they are completely synced and live. The stitching still fails miserably. With pipes I got a few seconds of delay per camera, and after stitching the streams were choppy and out of sync. With udp:// I got a smooth video stream as a result, but one camera has a ~5 sec delay, and the others 15 and 25.
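For reference, a minimal sketch of the two-stage setup I mean, assuming MPEG-TS over local UDP between the stages; the ports, the intermediate codec, and the hstack layout are placeholders for the real per-camera filter chains and the overlay layout:

```shell
# Stage 1: one receiver per camera, re-encoded and re-streamed locally.
# Repeat with cam2.sdp -> port 1242 and cam3.sdp -> port 1243.
ffmpeg -hide_banner -protocol_whitelist "rtp,file,udp" -i cam1.sdp \
  -c:v mpeg2video -q:v 4 \
  -f mpegts udp://localhost:1241 &

# Stage 2: the stitcher consumes the three local streams and sends the
# combined picture out (hstack used here only for brevity).
ffmpeg -hide_banner \
  -i udp://localhost:1241 -i udp://localhost:1242 -i udp://localhost:1243 \
  -filter_complex "[0:v][1:v][2:v]hstack=inputs=3" \
  -c:v libx264 -crf 25 -preset ultrafast -tune zerolatency \
  -f mpegts udp://localhost:1234
```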

This smells like buffering. Changing fifo_size and buffer_size doesn't seem to have much influence. I tried adding a local-time timestamp in the re-streamer instances - that is how I found the 5, 15 and 25 sec delays. I tried adding a frame timestamp in the stitcher instance - they come out completely synced. So setpts=PTS-STARTPTS doesn't help either.

So the buffering happens somewhere between the udp:// socket and the -filter_complex input. How do I get rid of it? What do you think of my workaround? Am I doing it completely wrong?


1 Answer


Packet loss can be mitigated by prepending -thread_queue_size 1024 to each input (it is a per-input option).
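Applied to the command from the question, that would look like the following; 1024 is just an example value, so tune it as needed (I have also substituted the input stream labels [0:v], [1:v], [2:v] for the undefined [a], [b], [c]):

```shell
ffmpeg -hide_banner \
  -protocol_whitelist "rtp,file,udp" -thread_queue_size 1024 -i cam1.sdp \
  -protocol_whitelist "rtp,file,udp" -thread_queue_size 1024 -i cam2.sdp \
  -protocol_whitelist "rtp,file,udp" -thread_queue_size 1024 -i cam3.sdp \
  -filter_complex "\
    nullsrc=size=1800x600 [back]; \
    [back][1:v]overlay=x=1000[tmp1]; \
    [tmp1][2:v]overlay=x=600[tmp2]; \
    [tmp2][0:v]overlay" \
  -vcodec libx264 \
  -crf 25 -maxrate 4M -bufsize 8M -r 30 -preset ultrafast -tune zerolatency \
  -f mpegts udp://localhost:1234
```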

Did you find a way to improve inter-camera sync? I have not, though I believe the approach, absent being able to extract NTP data from the streams, is to treat each frame as live with no temporal buffering. No luck with that approach yet, though.