
I have a background image of a plain surface. My goal is to track objects that are placed or moved on the surface.

I'm using MOG2 to find foreground objects with a learning rate of 0, so the background is not updated (otherwise a static object would be incorporated into the background).
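
For reference, a minimal Python sketch of this setup, assuming a cv2.VideoCapture source and an arbitrary number of learning frames:

    import cv2

    cap = cv2.VideoCapture("surface.mp4")              # assumed source
    mog2 = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

    # Learn the plain surface from the first frames.
    for _ in range(50):
        ok, frame = cap.read()
        if not ok:
            break
        mog2.apply(frame, learningRate=-1)              # -1 = automatic rate

    # Detection phase: learningRate=0 freezes the model, so a static
    # object keeps showing up in the foreground mask instead of fading.
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg_mask = mog2.apply(frame, learningRate=0)     # shadows marked 127
        cv2.imshow("foreground", fg_mask)
        if cv2.waitKey(1) == 27:                        # Esc quits
            break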

The result is fine, but I have a huge problem with light: if the lighting changes after the background is acquired, various artifacts are detected as foreground objects.

How can I improve robustness against lighting changes?


Update

I'm experimenting with a solution that works quite well, but it needs some fixes.

I'm using MOG2 in this manner (a sketch follows the list):

  1. Acquire and learn the background (BGK) using the first frames
  2. Apply MOG2 to the current frame with a learning rate of 0 (no update) and get the foreground mask (FG_MASK)
  3. For the next frames, use FG_MASK to mask the frame with BGK (the foreground regions are filled with BGK pixels) and apply the result to MOG2 with some learning rate (this updates the background)
  4. After that, update BGK by taking it from the MOG2 algorithm
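
A minimal Python sketch of steps 1-4, assuming a cv2.VideoCapture source; the number of learning frames and the 0.01 learning rate are placeholder values:

    import cv2

    cap = cv2.VideoCapture("surface.mp4")              # assumed source
    mog2 = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

    # 1. Learn the background (BGK) from the first frames.
    for _ in range(50):
        ok, frame = cap.read()
        if not ok:
            break
        mog2.apply(frame, learningRate=-1)
    bgk = mog2.getBackgroundImage()

    while True:
        ok, frame = cap.read()
        if not ok:
            break

        # 2. Foreground mask with learningRate=0 (no model update).
        fg_mask = mog2.apply(frame, learningRate=0)

        # 3. Fill the foreground regions with the stored BGK pixels and
        #    feed the composite to MOG2 with a small learning rate.
        composite = frame.copy()
        composite[fg_mask > 0] = bgk[fg_mask > 0]
        mog2.apply(composite, learningRate=0.01)

        # 4. Refresh BGK from the updated MOG2 model.
        bgk = mog2.getBackgroundImage()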

In this way, objects are masked out of the background while the background keeps updating, which gives good robustness against light changes.

Foreground detection with adaptive background

There are some drawbacks. For example, when the light changes, the object mask ("mask blob") keeps the previous brightness, and if the difference is too large it can be detected as a new object.

Drawbacks

In the above image you can see that the current frame is brighter and the mask for the static object is darker.

My idea is to adapt the "mask blob" by changing its brightness to follow the light changes. How can I achieve this with OpenCV?


Fix for previous drawbacks

Using the inpaint function instead of simply masking with BGK (step 3), I can keep the "mask blobs" in sync with background brightness changes. This fix has a drawback too: it does not perform well.
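
A hedged sketch of this variant of step 3, assuming mog2, frame and fg_mask come from the loop above; the dilation kernel and inpaint radius are arbitrary choices:

    import cv2

    def masked_background_update(mog2, frame, fg_mask, learning_rate=0.01):
        """Variant of step 3: inpaint the foreground blobs instead of
        pasting BGK pixels, so the blobs follow the current brightness."""
        # Dilate the mask a little so inpainting also covers blob borders.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
        inpaint_mask = cv2.dilate(fg_mask, kernel)

        # Fill the masked (object) regions from their surroundings.
        composite = cv2.inpaint(frame, inpaint_mask, 3, cv2.INPAINT_TELEA)

        # Update the model with the object-free composite frame.
        mog2.apply(composite, learningRate=learning_rate)
        return mog2.getBackgroundImage()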


Update 2

I think this is an interesting topic, so I will keep it updated. The inpaint function is very slow, so I'm trying another way. The HSV color space gives access to the brightness channel, so I can reduce the impact of brightness in this way (a sketch follows the list):

  1. Obtain the V channel with the split function
  2. Calculate the mean value of the V channel
  3. Apply a truncate threshold to the V channel using the mean value
  4. Rebuild the frame using the new V channel
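
A minimal sketch of these four steps; only the truncation at the mean of V comes from the list above, the rest (function name, BGR input) is an assumption:

    import cv2

    def reduce_brightness_impact(frame_bgr):
        """Clamp the V channel to its mean so brightness spikes are flattened."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)

        # 1. Obtain the V channel.
        h, s, v = cv2.split(hsv)

        # 2. Mean value of the V channel.
        mean_v = cv2.mean(v)[0]

        # 3. Truncate threshold: values above the mean are clipped to it.
        _, v_trunc = cv2.threshold(v, mean_v, 255, cv2.THRESH_TRUNC)

        # 4. Rebuild the frame with the new V channel.
        return cv2.cvtColor(cv2.merge((h, s, v_trunc)), cv2.COLOR_HSV2BGR)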

1 Answer


I had a similar problem while implementing a speed estimation algorithm; I hope my solution may help you.

One of the methods I tried was the Accumulative Difference Image (basically what you did with MOG2), but it failed to track stationary objects once the background was updated. When I did not update the background, I had the same problem you did.

So, I decided to use RGB/HSV thresholding. I set the boundaries for the color of the road (let us say gray) and created a binary image where everything with the color of the road was black (0) and everything else was white (1). Here is a nice tutorial on HSV thresholding. When choosing the boundaries you can account for lighting by setting, say, the upper boundary for bright lighting and the lower one for dark. However, this method may cause objects with a color similar to the background to be missed by the algorithm. Another shortcoming is that the background should be uniform, without any details.
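
A rough sketch of this thresholding, assuming a gray (low-saturation) road; the HSV bounds are placeholders that must be tuned for the actual lighting:

    import cv2
    import numpy as np

    def road_mask(frame_bgr):
        """Binary image: road-colored (gray) pixels -> 0, everything else -> 255."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)

        # Gray road: any hue, low saturation, V range wide enough to cover
        # both dark and bright lighting (bounds are assumptions, tune them).
        lower = np.array([0, 0, 60])
        upper = np.array([179, 50, 220])
        road = cv2.inRange(hsv, lower, upper)   # road pixels -> 255

        # Invert so objects (non-road colors) are white, road is black.
        return cv2.bitwise_not(road)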

Another method you can try is to convert both the input image and the background to grayscale and then subtract them manually. This gives you an opportunity to tweak the threshold level for the difference from the background. Let us say a background pixel with a value of 120 in dark conditions has a value of 140 in bright conditions, so the difference is 20. An object pixel may have, say, a value of 180 while the background value is 120, so the difference is 60. Set the threshold for the difference to 20, set values below 20 to 0 and values above 20 to 1, and this should do the trick (all values are on a scale from 0 to 255).
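
A short sketch of this manual subtraction using the numbers above (threshold of 20, output on a 0-255 scale); the function name and BGR inputs are assumptions:

    import cv2

    def diff_mask(frame_bgr, background_bgr, thresh=20):
        """Absolute grayscale difference from the background, binarized."""
        gray_frame = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        gray_bg = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2GRAY)

        # |frame - background|: small differences are lighting drift,
        # large differences are objects.
        diff = cv2.absdiff(gray_frame, gray_bg)

        # Values <= thresh -> 0, values > thresh -> 255.
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        return mask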

Good luck!