9 votes

I'm generating an MJPEG stream using Flask and flask-restful. For reasons, I want to consume this stream in another Python program, for which I use OpenCV (3). The problem is that the first frame requested comes in fine, but the second frame requested (after a delay) is not received properly and throws the following error multiple times:

[mpjpeg @ 0000017a86f524a0] Expected boundary '--' not found, instead found a line of 82 bytes

I believe this happens because the boundary of a frame is set manually. The offending code is below.

MJPEG Stream generation:

## Controller for the streaming of content.
class StreamContent(Resource):
    @classmethod
    def setVis(cls, vis):
        cls.savedVis = vis

    def get(self):
        return Response(gen(VideoCamera(self.savedVis)),
                        mimetype='multipart/x-mixed-replace; boundary=frame')


## Generate a new VideoCamera and stream the retrieved frames.
def gen(camera):
    frame = camera.getFrame()
    while frame is not None:
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n\r\n')
        time.sleep(0.07)
        frame = camera.getFrame()

## Allows for the reading of video frames.
class VideoCamera(object):
    def __init__(self, vis):
        #object we retrieve the frame from.
        self.vis = vis

    ## Get the current frame.
    def getFrame(self):
        image = self.vis.mat_frame_with_overlay
        # We are using Motion JPEG, but OpenCV defaults to capture raw images,
        # so we must encode it into JPEG in order to correctly display the
        # video/image stream.
        ret, jpeg = cv2.imencode('.jpg', image)
        return jpeg.tobytes()

MJPEG Stream retrieval:

"""
Get a single frame from the camera.
"""        
class Image(Resource):
    def get(self):
        camera = VideoCamera()
        return Response(camera.getSingleFrame(), mimetype='image/jpeg')

"""
Contains methods for retrieving video information from a source.
"""
class VideoCamera(object):
    def __del__(self):
        self.video.release()

    @classmethod
    def setVideo(cls, video):
        cls.video = video

    ## Get the current frame.
    def getSingleFrame(self):
        self.startVideoFromSource(self.video)
        ret, image = self.video.read()
        time.sleep(0.5)
        ret, image = self.video.read()
        # We are using Motion JPEG, but OpenCV defaults to capture raw images,
        # so we must encode it into JPEG in order to correctly display the
        # video/image stream.
        ret, jpeg = cv2.imencode('.jpg', image)
        self.stopVideo()
        return jpeg.tobytes()

    def stopVideo(self):
        self.video.release()
Comments:

Did you ever figure out a way to get this to work? – anaBad

@anaBad I have added an answer as it got too lengthy for a comment, I hope this helps you out! – Arastelion

5 Answers

7 votes

Changing the frame generator worked for me:

yield (b'--frame\r\n'
       b'Content-Type:image/jpeg\r\n'
       b'Content-Length: ' + f"{len(frame)}".encode() + b'\r\n'
       b'\r\n' + frame + b'\r\n')
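
For context, a complete generator built around this fix might look like the sketch below (the camera.getFrame() method returning raw JPEG bytes, or None when the stream ends, is assumed from the question's VideoCamera class):

```python
def gen(camera):
    # Emit one multipart chunk per JPEG frame. The Content-Length header
    # tells the decoder exactly where the image data ends, so it can find
    # the next '--frame' boundary reliably.
    while True:
        frame = camera.getFrame()  # raw JPEG bytes, or None when done
        if frame is None:
            break
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n'
               b'Content-Length: ' + str(len(frame)).encode() + b'\r\n'
               b'\r\n' + frame + b'\r\n')
```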
1 vote

For anaBad (and others):

Oof, it's been a while since this problem. If I remember correctly, the short answer is: no, I was never able to get this to work properly.

The camera is accessed by multiple programs at the same time this way (when a request is sent to the API more than once, multiple threads start reading the camera), which the camera cannot handle. The best way to handle this properly (in my opinion) is to read the camera in a separate class on its own thread, and use an observer pattern for the API. Every time a new request comes in from a client to read the camera, an observer will send the new frames once they become available.

This solves the problem of the camera being accessed by multiple class instances/threads, which is why this did not work. Get around this problem and it should work fine.
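
As a rough illustration of that design, here is a minimal sketch. The FrameSource-style object passed in is a hypothetical stand-in for the real camera (anything with a read() method returning frame bytes); only the reader thread ever touches the capture device, while each request thread just waits for the latest frame:

```python
import threading

class CameraReader:
    """Reads frames from a single source on its own thread; observers
    (e.g. per-request generators) only ever see the latest frame."""

    def __init__(self, source):
        self.source = source          # must expose read() -> frame bytes
        self.latest = None
        self.cond = threading.Condition()
        self.running = True
        self.thread = threading.Thread(target=self._loop, daemon=True)
        self.thread.start()

    def _loop(self):
        # The only place the camera is ever read from.
        while self.running:
            frame = self.source.read()
            with self.cond:
                self.latest = frame
                self.cond.notify_all()    # wake every waiting observer

    def getFrame(self, timeout=1.0):
        # Called from any number of request threads; never touches the camera.
        with self.cond:
            self.cond.wait(timeout=timeout)
            return self.latest

    def stop(self):
        self.running = False
        self.thread.join(timeout=1.0)
```

Each Flask request handler would then call reader.getFrame() in its generator loop instead of opening its own VideoCapture.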

1 vote

Maybe it's too late to answer, but I got the same issue and found a solution.

The error [mpjpeg @ 0000017a86f524a0] Expected boundary '--' not found, instead found a line of 82 bytes is an error message from FFmpeg, which OpenCV uses as a backend MJPEG decoder.

It means the images are streamed as mpjpeg (= multipart JPEG data), but the boundary that separates the individual JPEG images is not found, so the decoder cannot decode them.

The boundary should start with --; however, the server written in the question declares that the boundary is just frame here: mimetype='multipart/x-mixed-replace; boundary=frame'. This part should instead be mimetype='multipart/x-mixed-replace; boundary=--frame'.

I also found that a line separation between the boundary headers and the image data is mandatory (since the ffmpeg shipped with Ubuntu 18.04 and later?). Please see another implementation of an MJPEG server: ( https://github.com/ros-drivers/video_stream_opencv/blob/e6ab82a88dca53172dc2d892cd3decd98660c447/test/mjpg_server.py#L75 )

Hope it will help.
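
To make the two points concrete, here is a minimal sketch of what this answer suggests. Whether your decoder really needs the -- in the declared boundary may depend on the FFmpeg version, so treat this as an assumption to test against your setup:

```python
# Declared boundary carries the leading '--', matching the boundary line
# that each part actually emits (as in the linked mjpg_server.py).
MIMETYPE = 'multipart/x-mixed-replace; boundary=--frame'

def make_part(jpeg_bytes):
    # Boundary line, headers, then a mandatory blank line before the data.
    return (b'--frame\r\n'
            b'Content-Type: image/jpeg\r\n'
            b'\r\n' + jpeg_bytes + b'\r\n')
```

The server would then pass MIMETYPE to Response(...) and yield make_part(frame) for each encoded frame.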

1 vote

I'm new to this, but it's definitely a multithreading issue, as it only happens sometimes when I reload the page. Easy fix:

camera.py

import cv2, threading

lock = threading.Lock()
camIpLink = 'http://user:[email protected]/with/video/footage'
cap = cv2.VideoCapture(camIpLink)

def getFrame():
    global cap
    with lock:  # serialize camera access across request threads
        while True:
            try:
                return bytes(cv2.imencode('.jpg', cap.read()[1])[1])
            except Exception:
                print("Frame exception")
                cap.release()
                cap = cv2.VideoCapture(camIpLink)

server.py

from camera import getFrame
def gen():
    while True:
        frame = getFrame()
        try:
            yield (b'--frame\r\n'
                   b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')
        except Exception:
            print("Yield Exception")
            return

@app.route("/cam", methods=['GET']) # todo authentication
def camVideoFootage():
    return Response(gen(),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

I did some error handling through trial and error. Hope that helps!

1 vote

I know this is a bit specific, but I was getting that error with a Mobotix camera and had to pass an additional parameter, needlength, in the stream URL to ask the camera to send me the image boundaries.

With that, I was able to read the stream with OpenCV without the boundary error.

This parameter wasn't documented anywhere except on the camera's own help page at:

http://camera_url/cgi-bin/faststream.jpg?help

And it says:

needlength
Need Content-Length
Send HTTP content-length for every frame in server push stream.
Note: This option is not useful for browsers.

So I had to modify the stream URL to look like:

http://camera_url/control/faststream.jpg?stream=full&needlength

My guess is that other cases may have a similar cause: OpenCV is not finding the expected image boundary markers in the stream.