I have a video in yuv420p pixel format. At first I read each frame's bytes through a pipe, asking ffmpeg for rgb24, and used PIL to build an image from them. However, the frames read as rgb24 seem to lose a little quality.
Here is the command for reading frames in the rgb24 pixel format:
ffmpeg -y -i input.mp4 -vcodec rawvideo -pix_fmt rgb24 -an -r 25 -f rawvideo pipe:1
frame_data = self.process.stdout.read(1920*1080*3)
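For context, the whole rgb24 pipeline looks roughly like this (a sketch assuming the pipe is opened with `subprocess` and the resolution is 1920x1080; the file name is just a placeholder):

```python
import subprocess
from PIL import Image

WIDTH, HEIGHT = 1920, 1080
FRAME_SIZE = WIDTH * HEIGHT * 3  # rgb24 is packed: 3 bytes per pixel

def rgb24_frames(path):
    """Yield PIL images decoded from an ffmpeg rgb24 rawvideo pipe."""
    process = subprocess.Popen(
        ["ffmpeg", "-y", "-i", path, "-vcodec", "rawvideo",
         "-pix_fmt", "rgb24", "-an", "-r", "25",
         "-f", "rawvideo", "pipe:1"],
        stdout=subprocess.PIPE,
    )
    while True:
        frame_data = process.stdout.read(FRAME_SIZE)
        if len(frame_data) < FRAME_SIZE:
            break  # end of stream (or a short read at EOF)
        yield Image.frombytes("RGB", (WIDTH, HEIGHT), frame_data)
```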
Then I tried reading it in the yuv420p pixel format:
ffmpeg -y -i input.mp4 -vcodec rawvideo -pix_fmt yuv420p -an -r 25 -f rawvideo pipe:1
frame_data = self.process.stdout.read(1920*1080*3//2)  # integer division: read() needs an int
A single frame holds half the bytes of an rgb24 frame: 3110400 bytes for a 1920*1080 yuv420p frame. I passed this data to PIL:
Image.frombytes('YCbCr', (1920, 1080), frame_data)
but PIL raised a "not enough image data" error. I looked through the modes that PIL supports in frombytes, and none of them uses 12 bits per pixel. I also tried converting the YUV data to RGB myself, but that takes far longer, which matters when processing a long video.
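(For what it's worth, the "not enough image data" error is expected: PIL's `YCbCr` mode is packed 4:4:4, i.e. 3 bytes per pixel, while yuv420p is planar with the chroma planes subsampled 2x in each direction, i.e. 1.5 bytes per pixel. One way to bridge the gap is to upsample the chroma planes with NumPy and hand PIL a packed YCbCr buffer; a sketch, assuming NumPy is acceptable:)

```python
import numpy as np
from PIL import Image

WIDTH, HEIGHT = 1920, 1080

def yuv420p_to_image(frame_data):
    """Convert one planar yuv420p frame (w*h*3//2 bytes) to a PIL image.

    PIL's 'YCbCr' mode expects packed 4:4:4 data (3 bytes per pixel),
    so the subsampled chroma planes must be upsampled first.
    """
    y_size = WIDTH * HEIGHT
    c_size = y_size // 4  # each chroma plane is quarter-resolution
    buf = np.frombuffer(frame_data, dtype=np.uint8)
    y = buf[:y_size].reshape(HEIGHT, WIDTH)
    cb = buf[y_size:y_size + c_size].reshape(HEIGHT // 2, WIDTH // 2)
    cr = buf[y_size + c_size:].reshape(HEIGHT // 2, WIDTH // 2)
    # Nearest-neighbour upsample of the chroma planes to full resolution.
    cb = cb.repeat(2, axis=0).repeat(2, axis=1)
    cr = cr.repeat(2, axis=0).repeat(2, axis=1)
    ycbcr = np.dstack((y, cb, cr))
    return Image.fromarray(ycbcr, mode="YCbCr")
```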
Am I doing something wrong? Is there any way to write an image from raw YUV data without any conversion?
`ffmpeg` or Python? What is the actual problem - reading something (if so, what?), or writing something (if so, what?), or losing quality, or losing speed? Can you share your input file? – Mark Setchell