I know that with cv2.createBackgroundSubtractorMOG2() we can obtain the foreground mask using a background model estimated from the last 500 frames (the default history). But what if I already have a background picture and just want to subtract the foreground from each frame using that picture? What I'm trying is like this:

import numpy as np
import cv2

video = "xx.avi"
cap = cv2.VideoCapture(video)
bg = cv2.imread("bg.png")

while True:
    ret, frame = cap.read()
    if ret:
        # copy the frame only after the read succeeded, so the last
        # (empty) read does not crash on frame.copy()
        original_frame = frame.copy()

        # get foreground mask?
        fgmask = frame - bg

        # filter kernel for denoising:
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

        opening = cv2.morphologyEx(fgmask, cv2.MORPH_OPEN, kernel)

        closing = cv2.morphologyEx(opening, cv2.MORPH_CLOSE, kernel)

        # Dilate to merge adjacent blobs
        dilation = cv2.dilate(closing, kernel, iterations = 2)

        # show fg:dilation
        cv2.imshow('fg mask', dilation)
        cv2.imshow('original', original_frame)
        k = cv2.waitKey(30) & 0xff
        if k == 27:
            cap.release()
            cv2.destroyAllWindows()
            break
    else:
        break

However, I get colourful frames when doing fgmask = frame - bg. How can I get the correct foreground mask?

Just an idea: how about letting MOG2 learn your background by feeding the background image to BackgroundSubtractorMOG2.apply, then using the same function to get the mask? – dhanushka
Can you try converting to grayscale first (just a suggestion, as I haven't seen the video)? Also try using cv2.absdiff() and update us with the output. – Lokesh Kumar

1 Answer


You are getting colourful images because you are subtracting two colour images, so the value you get at each pixel is the per-channel (B, G and R) difference between the two images. To perform background subtraction, the simplest option, as dhanushka comments, is to use MOG2 and feed it your background image for some number of frames (say, the 500 of the default history) so that it learns it as the background. MOG2 models the variability of each pixel's colour with a mixture of Gaussians, so if you always feed it the same image it will not learn any variability; still, it should work for what you intend to do. The nice thing about this approach is that MOG2 takes care of many extra details, such as updating the model over time and dealing with shadows.
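
A minimal sketch of that idea, reusing the question's "xx.avi" and "bg.png" (note that passing learningRate=0 to freeze the model during the video is my own assumption, not something required by MOG2):

import cv2

cap = cv2.VideoCapture("xx.avi")
bg = cv2.imread("bg.png")

fgbg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

# "teach" MOG2 the background by feeding it the still image repeatedly;
# 500 iterations matches the default history length
for _ in range(500):
    fgbg.apply(bg)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # learningRate=0 stops the video frames from updating the model,
    # so the pre-learned background stays fixed
    fgmask = fgbg.apply(frame, learningRate=0)
    cv2.imshow('fg mask', fgmask)
    if cv2.waitKey(30) & 0xff == 27:
        break

cap.release()
cv2.destroyAllWindows()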

Another option is to implement your own background subtraction method, as you tried to do. To test it, you need to convert your fgmask colour image into something you can easily threshold so that each pixel can be classified as background or foreground. A simple option is to convert it to grayscale and then apply a binary threshold; the lower the threshold, the more "sensitive" your subtraction method is (play with the thresh value), i.e.:

...
    # get foreground difference; cv2.absdiff avoids the uint8 wrap-around
    # that plain frame - bg produces (as Lokesh Kumar suggests above)
    fgmask = cv2.absdiff(frame, bg)

    # collapse to a single channel so it can be thresholded
    gray_image = cv2.cvtColor(fgmask, cv2.COLOR_BGR2GRAY)
    thresh = 20
    im_bw = cv2.threshold(gray_image, thresh, 255, cv2.THRESH_BINARY)[1]

    # filter kernel for denoising:
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

    opening = cv2.morphologyEx(im_bw, cv2.MORPH_OPEN, kernel)
...
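
Two notes on the snippet: cv2.absdiff replaces the plain frame - bg from the question because, with uint8 arrays, subtraction wraps around modulo 256 wherever the background is brighter than the frame, which corrupts the mask. And thresh = 20 is only a starting point; raise it if sensor noise shows up as foreground, lower it if faint objects are missed.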