
After calibration and rectification, I was able to extract a fairly accurate disparity map from my stereo camera's feed. The map contains correct disparity values: when I convert them to metric distances, the results are really accurate. Untouched, my disparity map looks like this:

Rectified image on the left, disparity map on the right; below, the correct depth estimate for my forehead (~0.5 m, the white dot on the disparity map).

with SGBM values set to:

minDisparity -> 0;
numDisparities -> 32;
P1 -> 0;
P2 -> 0;
blockSize -> 0;
speckleRange -> 0;
speckleWindowSize -> 0;
disp12MaxDiff -> 0;
preFilterCap -> 44;
uniquenessRatio -> 0;
mode -> 2 (MODE_SGBM_3WAY);

But I can easily change all of them through a set of trackbars.

As you can see, the disparity map looks grainy, and many non-textured areas contain black pixels whose disparity could not be computed. Moreover, details and edges are not sharp, which is not acceptable for my final application. I looked for filters and found that a very common one is the Weighted Least Squares (WLS) filter. I applied it, and these were the initial, bad results:

Upper right: disparity map; upper left: WLS-filtered disparity map; bottom right: confidence map. As you can see, the WLS-filtered map is bad, and in fact the confidence map is mostly black (and the depth values are completely wrong).

By playing with the SGBM parameters I get:

Upper left: color map (easier to see the depth perspective); upper right: WLS-filtered disparity; lower left: confidence map; lower right: unfiltered disparity map.

Areas where the confidence map is white are correctly filtered (you can see that in both the color map and the WLS-filtered image), and their depth information is comparable to the unfiltered disparity map.

My problem is that no matter what I try, I cannot get high confidence for closer objects, like my figure in the images above. I have tried everything.

So in conclusion my question is: is there a way to get a smooth, clean and temporally stable disparity map for the entire field of view (similar to what I am getting for the wall and hallway behind me)? Should I stick to the WLS filtering or use some other filters? In that case, what do you suggest?

I am using OpenCV and Visual Studio. Any advice is greatly appreciated.

Thanks!!

I am afraid this is off-topic for SO. – Slava
How is this off-topic? – Marco Beccarini
SO is about specific programming problems, especially when you tag it with C++. This is just my opinion; that downvote is not mine, though. – Slava

1 Answer


For those having similar problems: in my case, I realized I was passing the wrong images to the filter function. I was calling:

wls_filter->filter(dispL, recl, dispFiltered, dispR, Rect(), recr);

where dispL and dispR were the left and right disparity maps AFTER this normalization:

double minValL, maxValL;
minMaxLoc(disp16sL, &minValL, &maxValL);
disp16sL.convertTo(dispL, CV_8UC1, 255 / (maxValL - minValL));

(and the same for the right disparity map)

Instead, calling:

    wls_filter->filter(disp16sL, recl, dispFiltered, disp16sR, Rect(), recr);

where disp16sL and disp16sR are the disparity maps before normalization, and THEN normalizing the filtered disparity map, gave me much better results, with an almost completely white confidence map.