3 votes

I am learning video encoding and decoding with FFmpeg. I tried the code sample on this page (only the video encoding and decoding part). The dummy image created there is in YCbCr format. How do I achieve similar encoding by creating RGB frames instead? I am stuck at:

Firstly, how do I create this RGB dummy frame?

Secondly, how do I encode it? Which codec should I use? Most of them work with YUV420P only...

EDIT: I have a YCbCr encoder and decoder as given on this page. The thing is, I have an RGB frame sequence in my database and I need to encode it, but the encoder is for YCbCr. So I am wondering how to convert the RGB frames to YCbCr (or YUV420P) somehow and then encode them. At the decoding end, I get decoded YCbCr frames and convert them back to RGB. How do I go ahead with this?

I did try the SwsContext approach, but the converted frames lose color information and also have scaling errors. I thought of doing it manually with two for loops and the colorspace conversion formulae, but I am not able to access individual pixels of a frame using the FFmpeg/libav library! In OpenCV we can easily access a pixel with something like Mat img(x,y), but there is no such thing here. I am totally a newcomer to this area...

Can someone help me?

Many thanks!

You asked the same question 3 times! – szatmary
huffyuv and libx264rgb support rgb24 and bgra. utvideo supports rgb24 and rgba. ffv1 supports bgr0 and bgra. qtrle supports rgb24, rgb555be, and argb. See ffmpeg -h encoder=foo. – llogan
@szatmary Yes, I wasn't able to post it properly; I faced some problems. OK, please check the EDIT, it explains my problem better. Thanks! – learner

2 Answers

4 votes

The best way to convert is to use swscale. You can do it manually, but your version will be slower. There is no API to access pixel data in FFmpeg; you must access the buffers directly. YUV420P is a planar format, so the first buffer is the Y plane, with 1 byte per pixel. The U and V planes each hold 1 byte for every 4 pixels: they are scaled to 1/4 the size of the Y plane (half the width, half the height), under the assumption that the luminance (Y) channel contains the most information.

picture->data[0] = picture_buf;
picture->data[1] = picture->data[0] + size;
picture->data[2] = picture->data[1] + size / 4;
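
Since there is no pixel-accessor API, per-pixel access works directly on those planes. A minimal sketch, assuming an allocated YUV420P frame named picture and in-range coordinates x/y: U and V are subsampled 2x2, so both coordinates are halved, and each plane has its own linesize (row stride), which can be wider than the image:

uint8_t Y = picture->data[0][ y      * picture->linesize[0] +  x     ];
uint8_t U = picture->data[1][(y / 2) * picture->linesize[1] + (x / 2)];
uint8_t V = picture->data[2][(y / 2) * picture->linesize[2] + (x / 2)];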

Second, let's look at the colorspace conversion (these are the standard BT.601 coefficients, mapping full-range RGB to limited-range YCbCr):

void YUVfromRGB(double& Y, double& U, double& V, const double R, const double G, const double B)
{
  Y =  0.257 * R + 0.504 * G + 0.098 * B +  16;
  U = -0.148 * R - 0.291 * G + 0.439 * B + 128;
  V =  0.439 * R - 0.368 * G - 0.071 * B + 128;
}

And plug in some dummy values:

R = 255, G = 255, B = 255
Y =  235

R = 0, G = 0, B = 0
Y = 16

As you can see, the range 0 -> 255 is squished to 16 -> 235. Thus we have shown that there are some colors in the RGB color space that do not exist in the (digital) YUV color space. So why do we use YUV? It is the color space television has used going all the way back to the 1950s, when the color channels (U/V) were added to the existing black-and-white channel (Y).

Read more here: http://en.wikipedia.org/wiki/YCbCr
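
If you still want to convert manually despite the speed cost, here is a rough sketch that combines the formula above with the plane layout. The function name, the packed rgb buffer, and rgb_stride are hypothetical; U/V are taken from the top-left pixel of each 2x2 block for brevity, where a real converter would average the block:

/* Sketch only: packed RGB24 in, planar YUV420P out (planes preallocated). */
void rgb24_to_yuv420p(const uint8_t *rgb, int rgb_stride,
                      AVFrame *picture, int width, int height)
{
    int x, y;
    for (y = 0; y < height; y++) {
        for (x = 0; x < width; x++) {
            const uint8_t *p = rgb + y * rgb_stride + 3 * x;
            double R = p[0], G = p[1], B = p[2];
            picture->data[0][y * picture->linesize[0] + x] =
                (uint8_t)( 0.257 * R + 0.504 * G + 0.098 * B +  16);
            if (x % 2 == 0 && y % 2 == 0) {  /* one U/V sample per 2x2 block */
                picture->data[1][(y / 2) * picture->linesize[1] + x / 2] =
                    (uint8_t)(-0.148 * R - 0.291 * G + 0.439 * B + 128);
                picture->data[2][(y / 2) * picture->linesize[2] + x / 2] =
                    (uint8_t)( 0.439 * R - 0.368 * G - 0.071 * B + 128);
            }
        }
    }
}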

The scaling errors come from not using swscale correctly. Most likely you do not understand line stride: http://msdn.microsoft.com/en-us/library/windows/desktop/aa473780(v=vs.85).aspx
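
For reference, a correct conversion with libswscale looks roughly like this (width, height, a packed RGB24 buffer rgb, and an allocated YUV420P frame picture are assumed from the code above; the crucial part is the source stride, which is 3 * width for tightly packed RGB24 and more if your rows are padded):

#include <libswscale/swscale.h>

struct SwsContext *sws = sws_getContext(width, height, PIX_FMT_RGB24,
                                        width, height, PIX_FMT_YUV420P,
                                        SWS_BILINEAR, NULL, NULL, NULL);
const uint8_t *src_data[4]   = { rgb, NULL, NULL, NULL };
int            src_stride[4] = { 3 * width, 0, 0, 0 };  /* bytes per row */
sws_scale(sws, src_data, src_stride, 0, height,
          picture->data, picture->linesize);
sws_freeContext(sws);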

I do not know of any video codecs that operate in the RGB color space. You can convert (slightly lossy) between RGB and YUV using libswscale.

This video explains it further: https://xiph.org/video/vid2.shtml

1 vote

I think it is possible to encode raw video. In the sample you are referring to, you will have to use CODEC_ID_RAWVIDEO in avcodec_find_encoder to find the encoder for raw images. For the codec context pixel format, i.e. c->pix_fmt, you can use PIX_FMT_RGB24. Finally, you'll need to create a dummy rgb24 frame instead of a YCbCr one.
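
A rough sketch of those changes, using the old API names from the linked sample (newer FFmpeg spells them AV_CODEC_ID_RAWVIDEO / AV_PIX_FMT_RGB24); error handling is omitted, picture_buf is allocated as in the sample (width * height * 3 bytes here), and the fill pattern is just an arbitrary gradient:

AVCodec        *codec = avcodec_find_encoder(CODEC_ID_RAWVIDEO);
AVCodecContext *c     = avcodec_alloc_context3(codec);
c->width     = 352;
c->height    = 288;
c->time_base = (AVRational){1, 25};
c->pix_fmt   = PIX_FMT_RGB24;
avcodec_open2(c, codec, NULL);

/* RGB24 is packed, not planar: one plane, 3 bytes (R,G,B) per pixel. */
AVFrame *picture = avcodec_alloc_frame();
picture->data[0]     = picture_buf;
picture->linesize[0] = 3 * c->width;

int x, y;
for (y = 0; y < c->height; y++) {
    for (x = 0; x < c->width; x++) {
        uint8_t *p = picture->data[0] + y * picture->linesize[0] + 3 * x;
        p[0] = x % 256;  /* R */
        p[1] = y % 256;  /* G */
        p[2] = 128;      /* B */
    }
}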