I am trying to write a simple YUV video player in Python. After some initial study, I thought I could use PySide and started with it. As a first step, I have taken the following approach, without any consideration for real-time performance: read a YUV buffer (420 planar) -> convert the YUV image to RGB (32-bit format) -> call PySide utilities to display it.

The basic problem with my simple program is that only the first frame is displayed and the rest are not, even though the paint event does seem to happen, according to the counter in the code below. I would appreciate any comments on (i) mistakes or gaps in my understanding of painting/repainting at regular intervals on a QLabel/QWidget, and (ii) any pointers to Python-based video players/displays that work from a YUV or RGB source.
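For reference, the frame layout I am assuming is plain 4:2:0 planar: a full-resolution Y plane followed by quarter-resolution U and V planes, so one frame is width*height*3/2 bytes. As an illustration only (the helper name read_yuv420_frame is not part of my program), one frame could be read into separate numpy planes like this:

    import numpy as np

    def read_yuv420_frame(f, width, height):
        # One 4:2:0 frame = full-size Y plane + quarter-size U and V planes
        frame_size = width * height * 3 // 2
        raw = np.fromfile(f, dtype=np.uint8, count=frame_size)
        if raw.size < frame_size:
            return None  # end of file
        y = raw[:width * height].reshape(height, width)
        u = raw[width * height:width * height * 5 // 4].reshape(height // 2, width // 2)
        v = raw[width * height * 5 // 4:].reshape(height // 2, width // 2)
        return y, u, v

My actual test program is below: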

#!/usr/bin/python

import sys
from PySide.QtCore import *
from PySide.QtGui import *
import array
import numpy as np

class VideoWin(QWidget):
    def __init__(self, width, height, f_yuv):
        QWidget.__init__(self)
        self.width = width
        self.height = height
        self.f_yuv = f_yuv
        self.setWindowTitle('Video Window')
        self.setGeometry(10, 10, width, height)
        self.display_counter = 0
        self.img = QImage(width, height, QImage.Format_ARGB32)
        #qApp.processEvents()

    def getImageBuf(self):
        return self.img.bits()

    def paintEvent(self, e):
        painter = QPainter(self)
        self.display_counter += 1
        painter.drawImage(QPoint(0, 0), self.img)

    def timerSlot(self):
        print "In timer"
        yuv = array.array('B')
        # Wrap the QImage's bit buffer so writes to 'pix' go straight into self.img
        pix = np.ndarray(shape=(height, width), dtype=np.uint32, buffer=self.getImageBuf())

        for i in range(0,self.height):
            for j in range(0, self.width):
                pix[i, j] = 0

        for k in range (0, 10):
            #qApp.processEvents()
            # Read one 4:2:0 frame (3*width*height/2 bytes) from the file
            yuv.fromfile(self.f_yuv, 3*self.width*self.height/2)
            for i in range(0, self.height):
                for j in range(0, self.width):
                    # Look up Y, then the quarter-resolution U and V samples for this pixel
                    Y_val = yuv[(i*self.width)+j]
                    U_val = yuv[self.width*self.height + ((i/2)*(self.width/2))+(j/2)]
                    V_val = yuv[self.width*self.height + self.width*self.height/4 + ((i/2)*(self.width/2))+(j/2)]
                    # Integer BT.601 YUV -> RGB conversion
                    C = Y_val - 16
                    D = U_val - 128
                    E = V_val - 128
                    R = (( 298 * C           + 409 * E + 128) >> 8)
                    G = (( 298 * C - 100 * D - 208 * E + 128) >> 8)
                    B = (( 298 * C + 516 * D           + 128) >> 8)
                    if R > 255:
                        R = 255
                    if G > 255:
                        G = 255
                    if B > 255:
                        B = 255

                    assert(int(R) < 256)
                    pix[i, j] = (255 << 24 | ((int(R) % 256 )<< 16) | ((int(G) % 256 ) << 8) | (int(B) % 256))

            self.repaint()
            print "videowin.display_counter = %d" % videowin.display_counter


if __name__ == "__main__":
    try:
        yuv_file_name = sys.argv[1]
        width = int(sys.argv[2])
        height = int(sys.argv[3])
        f_yuv = open(yuv_file_name, "rb")

        videoApp = QApplication(sys.argv)

        videowin = VideoWin(width, height, f_yuv)

        timer = QTimer()
        timer.singleShot(100, videowin.timerSlot)

        videowin.show()
        videoApp.exec_()


        sys.exit(0)
    except NameError:
        print("Name Error : ", sys.exc_info()[1])
    except SystemExit:
        print("Closing Window...")
    except Exception:
        print(sys.exc_info()[1])

I have tried a second approach, using a Signal object which "emits" each decoded RGB image (converted from YUV); the signal is caught by an updateFrame method in the displaying class, which draws the received RGB buffer/frame using QPainter.drawImage(...):

YUV-to-RGB decode ---> Signal(image buffer) ---> updateFrame ---> QPainter.drawImage(...)

This also displays only the first image, although the slot which catches the signal (and receives the image) is called as many times as the signal is emitted by the YUV-to-RGB converter/decoder. I have also tried running the YUV-to-RGB converter and the video display (calling drawImage) in separate threads, but the result is the same.
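The wiring I used for that second approach looks roughly like this (a trimmed-down, untested sketch rather than my full program; the class names here are just placeholders):

    from PySide.QtCore import QObject, QPoint, Signal
    from PySide.QtGui import QImage, QPainter, QWidget

    class Decoder(QObject):
        frame_ready = Signal(QImage)    # emitted once per decoded frame

        def decode_one(self, img):
            # ... fill 'img' (an ARGB32 QImage) from the next YUV frame here ...
            self.frame_ready.emit(img)

    class Display(QWidget):
        def __init__(self):
            super(Display, self).__init__()
            self.current = None

        def updateFrame(self, image):
            # Slot: keep the latest frame and ask Qt to repaint
            self.current = image
            self.update()

        def paintEvent(self, event):
            if self.current is not None:
                QPainter(self).drawImage(QPoint(0, 0), self.current)

    # decoder.frame_ready.connect(display.updateFrame)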

Please note that in both cases I am writing the RGB pixel values directly into the bit buffer of the QImage object that is part of the VideoWin class shown in the code (note the line pix = np.ndarray(shape=(height, width), dtype=np.uint32, buffer=videowin.getImageBuf()), which wraps the img.bits() buffer of the QImage). Also, for this test I am decoding and displaying only the first 10 frames of the video file.

Versions: Python 2.7, Qt 4.8.5 using PySide.

The fundamental problem with your example code is that it does all the processing before the event loop has started. You need to start the event loop first, then call a slot that does the processing (you could use QTimer.singleShot for this, or just add a button to the UI). You may also want to consider using QApplication.processEvents. - ekhumoro
Hi ekhumoro, thank you for your input. I am not sure I understand clearly what you meant. I have tried the changes as per the edited code above, but that doesn't help. Only the first frame is painted, even though the paintEvent handler is called 10 times (for the 10 frames of the video input). - satheeshbabu
Maybe you should provide a link to the yuv file you are using, so that others can test your code. - ekhumoro
I have put a 10-frame test stream, test.yuv, at dropbox.com/s/e42iyvv40q2zw2m/test.yuv?dl=0 - satheeshbabu
The resolution of test.yuv is 1920x1080. - satheeshbabu

2 Answers


From the docs for array.fromfile():

Read n items (as machine values) from the file object f and append them to the end of the array. [emphasis added]

The example code always indexes the array from the beginning, but fromfile appends each newly read frame to the end, so it is the first frame's pixels that get displayed over and over again. A simple fix would be to clear the array before reading the next frame:

    for k in range (0, 100):
        del yuv[:]
        yuv.fromfile(self.f_yuv, 3*self.width*self.height/2)

And note that, to see a difference, you will need to read at least sixty frames of the test file you linked to, because the first fifty or so are all the same (i.e. a plain green background).
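If you would rather jump straight to a frame that looks different, you could also seek past the identical frames before reading, since each 4:2:0 frame occupies a fixed number of bytes (an untested sketch):

    frame_size = 3 * self.width * self.height / 2    # 3,110,400 bytes at 1920x1080
    self.f_yuv.seek(60 * frame_size)                 # skip the first sixty frames
    del yuv[:]
    yuv.fromfile(self.f_yuv, frame_size)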


I have got this working, based on some modifications (and extensions) to the program suggested in Displaying a video stream in QLabel with PySide. I added a double-buffering mechanism between processing and display, used an array to read in the YUV file, and finally ran the YUV-to-RGB conversion as a separate thread. This works for me, i.e. it displays all frames in the file sequentially. Here is the program, in case anyone has suggestions or improvements; thanks for all the pointers so far! Please note that this is not running in real time.

#!/usr/bin/python

import sys
import time
from threading import Thread
from PySide.QtCore import *
from PySide.QtGui import *
from PIL import Image
import array
import struct
import numpy as np


class VideoDisplay(QLabel):
    def __init__(self):
        super(VideoDisplay, self).__init__()
        self.disp_counter = 0

    def updateFrame(self, image):
        self.disp_counter += 1
        self.setPixmap(QPixmap.fromImage(image))


class YuvVideoPlayer(QWidget):
    video_signal = Signal(QImage)
    video_display = None

    def __init__(self, f_yuv, width, height):
        super(YuvVideoPlayer, self).__init__()
        print "Setting up YuvVideoPlayer params"
        self.img = {}
        # The two QImages form a double buffer: one is displayed while the other is filled
        self.img[0] = QImage(width, height, QImage.Format_ARGB32)
        self.img[1] = QImage(width, height, QImage.Format_ARGB32)
        self.video_display = VideoDisplay()
        self.video_signal.connect(self.video_display.updateFrame)
        grid = QGridLayout()
        grid.setSpacing(10)
        grid.addWidget(self.video_display, 0, 0)
        self.setLayout(grid)
        self.setGeometry(0, 0, width, height)
        self.setMinimumSize(width, height)
        self.setMaximumSize(width, height)
        self.setWindowTitle('Control Center')
        print "Creating display thread"
        thYuv2Rgb = Thread(target=self.Yuv2Rgb, args=(f_yuv, width, height))
        print "Starting display thread"
        thYuv2Rgb.start()
        self.show()


    def Yuv2Rgb(self, f_yuv, width, height):
        '''Runs in a worker thread: converts YUV frames to RGB and emits them as QImages'''
        try:
            yuv = array.array('B')
            pix = {}
            # Wrap the bit buffers of both QImages so pixel writes modify them directly
            pix[0] = np.ndarray(shape=(height, width), dtype=np.uint32, buffer=self.img[0].bits())
            pix[1] = np.ndarray(shape=(height, width), dtype=np.uint32, buffer=self.img[1].bits())
            for i in range(0,height):
                for j in range(0, width):
                    pix[0][i, j] = 0
                    pix[1][i, j] = 0

            for k in range (0, 10):
                yuv.fromfile(f_yuv, 3*width*height/2)
                #y = yuv[0:width*height]
                for i in range(0, height):
                    for j in range(0, width):
                        Y_val = yuv[(i*width)+j]
                        U_val = yuv[width*height + ((i/2)*(width/2))+(j/2)]
                        V_val = yuv[width*height + width*height/4 + ((i/2)*(width/2))+(j/2)]

                        C = Y_val - 16
                        D = U_val - 128
                        E = V_val - 128
                        R = (( 298 * C           + 409 * E + 128) >> 8)
                        G = (( 298 * C - 100 * D - 208 * E + 128) >> 8)
                        B = (( 298 * C + 516 * D           + 128) >> 8)
                        if R > 255:
                            R = 255
                        if G > 255:
                            G = 255
                        if B > 255:
                            B = 255

                        pix[k % 2][i, j] = (255 << 24 | ((int(R) % 256 )<< 16) | ((int(G) % 256 ) << 8) | (int(B) % 256))
                # Hand the finished buffer to the GUI thread, then clear the
                # array so the next frame is not appended after this one
                self.video_signal.emit(self.img[k % 2])
                print "Completed pic num %r, disp_counter = %r" % (k, self.video_display.disp_counter)
                del yuv[:]


        except Exception, e:
            print(e)

if __name__ == "__main__":
    print "In Main"
    yuv_file_name = sys.argv[1]
    width = int(sys.argv[2])
    height = int(sys.argv[3])
    f_yuv = open(yuv_file_name, "rb")

    app = QApplication(sys.argv)
    print "Creating YuvVideoPlayer object"
    ex = YuvVideoPlayer(f_yuv, width, height)
    #ex.up_Video_callback(f_yuv, width, height)
    app.exec_()

    sys.exit(0)
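
As noted above, this is nowhere near real time, and the per-pixel Python loops in Yuv2Rgb are the main cost. A vectorised numpy version of the same BT.601 conversion should be able to fill the bits() view in one pass; here is an untested sketch of what the inner loops could be replaced with, assuming y, u and v are the numpy planes of one frame (the posted program reads into an array.array instead, so this is not a drop-in replacement):

    def yuv420_to_argb(pix, y, u, v):
        # pix is the (height, width) uint32 view over QImage.bits();
        # y, u, v are uint8 numpy arrays holding the planes of one frame.
        # Upsample chroma to full resolution by repeating each sample over a 2x2 block.
        d = u.repeat(2, axis=0).repeat(2, axis=1).astype(np.int32) - 128
        e = v.repeat(2, axis=0).repeat(2, axis=1).astype(np.int32) - 128
        c = y.astype(np.int32) - 16
        r = np.clip((298 * c + 409 * e + 128) >> 8, 0, 255).astype(np.uint32)
        g = np.clip((298 * c - 100 * d - 208 * e + 128) >> 8, 0, 255).astype(np.uint32)
        b = np.clip((298 * c + 516 * d + 128) >> 8, 0, 255).astype(np.uint32)
        pix[:, :] = 0xff000000 | (r << 16) | (g << 8) | b

Here pix would be pix[k % 2] from the program above, and y, u, v would come from reshaping the bytes of one frame into numpy arrays rather than indexing an array.array element by element.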