I am working on a tracking algorithm, and one of its earliest steps is background subtraction. The algorithm receives a series of frames representing a video with a moving object and a static background. The object is present in every frame.
In my first version of this step I computed a median image from all the frames and got a very good approximation of the background scene. Then I subtracted the resulting image from every frame in the video sequence to get the foreground (the moving objects).
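Here is roughly what that first version looked like (a minimal sketch; I am assuming the frames are already loaded as grayscale NumPy arrays, and the threshold value of 30 is an arbitrary choice):

```python
import cv2
import numpy as np

def median_background(frames):
    # Per-pixel median over all frames approximates the static background,
    # since the moving object occupies any given pixel only briefly.
    return np.median(np.stack(frames, axis=0), axis=0).astype(np.uint8)

def foreground_mask(frame, background, thresh=30):
    # Absolute difference against the background estimate,
    # then a binary threshold to isolate the moving object.
    diff = cv2.absdiff(frame, background)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask
```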
That method worked well, but then I tried to replace it with OpenCV's background subtractors, MOG and MOG2.
What I do not understand is how these two classes perform the precomputation of the background model. As far as I understand from dozens of tutorials and the documentation, these subtractors update the background model on every apply() call and return a foreground mask.
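For reference, this is the usage pattern I mean (a sketch; "video.avi" is a placeholder path):

```python
import cv2

cap = cv2.VideoCapture("video.avi")  # placeholder path
subtractor = cv2.createBackgroundSubtractorMOG2()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Every apply() call updates the background model AND returns the
    # foreground mask for this frame; there is no separate training step.
    fgmask = subtractor.apply(frame)

cap.release()
```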
But this means that the first result of apply() will be a blank mask, and later masks will contain a ghost of the object's initial position (see the example below):
What am I missing? I have googled a lot and seem to be the only one with this problem... Is there a way to precompute the background that I am not aware of?
EDIT: I found a "trick" to do it: before using OpenCV's MOG or MOG2, I compute the median background image and feed it to the first apply() call. The following apply() calls then produce foreground masks without the initial-position ghost.
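In code, the trick looks roughly like this (a sketch; passing learningRate=1.0 on the first call is my way of making the seed frame define the model entirely, and I am assuming the whole video fits in memory):

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("video.avi")  # placeholder path
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)
cap.release()

# Median over all frames approximates the static background.
background = np.median(np.stack(frames, axis=0), axis=0).astype(np.uint8)

subtractor = cv2.createBackgroundSubtractorMOG2()
# Seed the model: with learningRate=1.0 this first call rebuilds the
# model entirely from the median background image.
subtractor.apply(background, learningRate=1.0)

for frame in frames:
    mask = subtractor.apply(frame)  # no initial-position ghost now
```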
But still, is this how it should be done, or is there a better way?