I am using the live555 and ffmpeg libraries to receive and decode an RTP H264 stream from a server. The video stream was encoded by ffmpeg, using the Baseline profile and
x264_param_default_preset(m_params, "veryfast", "zerolatency")
I read this topic and add the SPS and PPS data to every frame that I receive from the network:
void ClientSink::NewFrameHandler(unsigned frameSize, unsigned numTruncatedBytes,
                                 timeval presentationTime, unsigned durationInMicroseconds)
{
    ...
    EncodedFrame tmp;
    tmp.m_frame = std::vector<unsigned char>(m_tempBuffer.data(), m_tempBuffer.data() + frameSize);
    tmp.m_duration = durationInMicroseconds;
    tmp.m_pts = presentationTime;
    // Add the SPS and PPS data to the frame; TODO: some devices may already send SPS and PPS data inside the frame.
    tmp.m_frame.insert(tmp.m_frame.begin(), m_spsPpsData.cbegin(), m_spsPpsData.cend());
    emit newEncodedFrame( SharedEncodedFrame(tmp) );
    m_frameCounter++;
    this->continuePlaying();
}
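m_spsPpsData is assembled once from the SDP's sprop-parameter-sets attribute (subsession->fmtp_spropparametersets() in live555). A minimal sketch of how I build it, assuming each parameter set has to be prefixed with an Annex-B start code so the decoder can find the NAL unit boundaries (the helper name buildSpsPpsData is only illustrative):

#include <vector>
#include "H264VideoRTPSource.hh" // declares SPropRecord and parseSPropParameterSets()

std::vector<unsigned char> buildSpsPpsData(char const* spropParameterSets)
{
    std::vector<unsigned char> data;
    unsigned numRecords = 0;
    // parseSPropParameterSets() comes from live555 and base64-decodes the records.
    SPropRecord* records = parseSPropParameterSets(spropParameterSets, numRecords);
    for (unsigned i = 0; i < numRecords; ++i)
    {
        // Prefix each parameter set (SPS, then PPS) with an Annex-B start code.
        static const unsigned char startCode[] = { 0x00, 0x00, 0x00, 0x01 };
        data.insert(data.end(), startCode, startCode + sizeof(startCode));
        data.insert(data.end(), records[i].sPropBytes,
                    records[i].sPropBytes + records[i].sPropLength);
    }
    delete[] records;
    return data;
}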
These frames are then passed to the decoder:
bool H264Decoder::decodeFrame(SharedEncodedFrame orig_frame)
{
    ...
    while (m_packet.size > 0)
    {
        int got_picture;
        int len = avcodec_decode_video2(m_decoderContext, m_picture, &got_picture, &m_packet);
        if (len < 0)
        {
            emit criticalError(QString("Decoding error"));
            return false;
        }
        if (got_picture)
        {
            std::vector<unsigned char> result;
            this->storePicture(result);
            if (m_picture->format == AVPixelFormat::AV_PIX_FMT_YUV420P)
            {
                //QImage img = QImage(result.data(), m_picture->width, m_picture->height, QImage::Format_RGB888);
                Frame_t result_rgb;
                if (!convert_yuv420p_to_rgb32(result, m_picture->width, m_picture->height, result_rgb))
                {
                    emit criticalError(QString("Failed to convert YUV420p image into rgb32; can't create QImage!"));
                    return false;
                }
                // The copy is needed because QImage shares the buffer it is given;
                // using the QImage after result_rgb has been destroyed would crash.
                unsigned char* copy_img = new unsigned char[result_rgb.size()];
                std::copy(result_rgb.cbegin(), result_rgb.cend(), copy_img);
                QImage img = QImage(copy_img, m_picture->width, m_picture->height, QImage::Format_RGB32,
                    [](void* array)
                    {
                        delete[] array;
                    }, copy_img);
                img.save(QString("123.bmp"));
                emit newDecodedFrame(img);
            }
        }
        // Advance past the bytes consumed by the decoder and continue with the rest of the packet.
        m_packet.data += len;
        m_packet.size -= len;
    }
    return true;
}
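storePicture() is not shown here; its job is to copy the three planes of m_picture into one contiguous buffer, roughly like this (a sketch that copies row by row because linesize can be larger than the visible width; my actual member function may differ in details):

void H264Decoder::storePicture(std::vector<unsigned char>& out)
{
    const int w = m_picture->width;
    const int h = m_picture->height;
    out.clear();
    out.reserve(w * h * 3 / 2);
    // Y plane at full resolution.
    for (int y = 0; y < h; ++y)
        out.insert(out.end(),
                   m_picture->data[0] + y * m_picture->linesize[0],
                   m_picture->data[0] + y * m_picture->linesize[0] + w);
    // U and V planes at half resolution (YUV420P).
    for (int p = 1; p <= 2; ++p)
        for (int y = 0; y < h / 2; ++y)
            out.insert(out.end(),
                       m_picture->data[p] + y * m_picture->linesize[p],
                       m_picture->data[p] + y * m_picture->linesize[p] + w / 2);
}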
avcodec_decode_video2 decodes the frames without reporting any error, but the decoded frames, after conversion from yuv420p to rgb32, are invalid. An example of such an image is available on this link.
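For completeness, convert_yuv420p_to_rgb32 is essentially a thin wrapper around libswscale. A sketch of it, assuming Frame_t is std::vector<unsigned char> and using AV_PIX_FMT_BGRA because that matches the QImage::Format_RGB32 byte order on little-endian machines:

extern "C" {
#include <libswscale/swscale.h>
#include <libavutil/pixfmt.h>
}
#include <vector>

typedef std::vector<unsigned char> Frame_t;

bool convert_yuv420p_to_rgb32(const Frame_t& yuv, int width, int height, Frame_t& rgb)
{
    SwsContext* ctx = sws_getContext(width, height, AV_PIX_FMT_YUV420P,
                                     width, height, AV_PIX_FMT_BGRA,
                                     SWS_BILINEAR, nullptr, nullptr, nullptr);
    if (!ctx)
        return false;

    // Plane layout produced by storePicture(): Y, then U, then V, tightly packed.
    const uint8_t* srcPlanes[3] = {
        yuv.data(),
        yuv.data() + width * height,
        yuv.data() + width * height + (width / 2) * (height / 2)
    };
    const int srcStrides[3] = { width, width / 2, width / 2 };

    rgb.resize(static_cast<size_t>(width) * height * 4);
    uint8_t* dstPlanes[1] = { rgb.data() };
    const int dstStrides[1] = { width * 4 };

    sws_scale(ctx, srcPlanes, srcStrides, 0, height, dstPlanes, dstStrides);
    sws_freeContext(ctx);
    return true;
}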
Do you have any idea what I am doing wrong?