Question: What does the Libav/FFmpeg decoding pipeline need in order to produce valid presentation timestamps (PTS) in the decoded AVFrames?
I'm decoding an H264 stream received via RTSP. I use Live555 to parse the H264 and feed the stream to my libav decoder. Decoding and display work fine, except that I'm not using any timestamp info and get some stuttering.
After getting a frame with `avcodec_decode_video2`, the presentation timestamp (PTS) is not set.
I need the PTS to determine how long each frame should be displayed, and so avoid the stuttering.
Notes on my pipeline
- I get the SPS/PPS information via Live555 and copy these values to my `AVCodecContext->extradata`.
- I also send the SPS and PPS to my decoder as NAL units, with the {0,0,0,1} start code prepended.
- Live555 provides a presentation timestamp for each packet; these are in most cases not monotonically increasing, since the stream contains B-frames.
- My `AVCodecContext->time_base` is not valid; its value is 0/2.
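For context, the extradata setup described in the notes above looks roughly like this (a sketch; `build_extradata` and the buffer names are my own, and the SPS/PPS bytes are whatever Live555 hands back):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Build an Annex-B buffer from SPS and PPS, each prefixed with the
 * {0,0,0,1} start code. The result is what gets assigned to
 * AVCodecContext->extradata (with extradata_size = *out_len). */
static uint8_t *build_extradata(const uint8_t *sps, size_t sps_len,
                                const uint8_t *pps, size_t pps_len,
                                size_t *out_len)
{
    static const uint8_t startcode[4] = {0, 0, 0, 1};
    size_t len = 4 + sps_len + 4 + pps_len;
    uint8_t *buf = malloc(len);
    if (!buf)
        return NULL;

    uint8_t *p = buf;
    memcpy(p, startcode, 4);  p += 4;
    memcpy(p, sps, sps_len);  p += sps_len;
    memcpy(p, startcode, 4);  p += 4;
    memcpy(p, pps, pps_len);

    *out_len = len;
    return buf;
}
```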
Unclear:
- Where exactly should I set the NAL PTS coming from my H264 sink (Live555)? As the `AVPacket->dts`, `pts`, neither, or both?
- Why is my `time_base` value not valid? Where is this information supposed to come from?
- According to the RTP payload spec:

  > The RTP timestamp is set to the sampling timestamp of the content. A 90 kHz clock rate MUST be used.
- Does this mean that I must always assume a 1/90000 time base for the decoder? What if some other value is specified in the SPS?
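On the last point, one approach I'm considering (a sketch, with a helper name of my own invention) is to convert the Live555 `struct timeval` presentation time into 90 kHz ticks, rebased against the first presentation time seen, and store that in `AVPacket->pts`:

```c
#include <stdint.h>
#include <sys/time.h>

/* Convert a Live555 presentation time (struct timeval) into a PTS in
 * 90 kHz ticks, relative to the first presentation time received.
 * 90000 is the RTP clock rate mandated for H.264 video. */
static int64_t timeval_to_pts90k(struct timeval pt, struct timeval first)
{
    int64_t usec = (int64_t)(pt.tv_sec - first.tv_sec) * 1000000
                 + (pt.tv_usec - first.tv_usec);
    /* 90000 ticks/second == 9 ticks per 100 microseconds; 64-bit math
     * keeps the multiplication from overflowing. */
    return usec * 9 / 100;
}
```

The resulting value would go into `packet.pts` before calling `avcodec_decode_video2`, together with `time_base = 1/90000` on the context; whether that is the right place for it is exactly what I'm asking above.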