0
votes

I'm currently working on an assignment consisting of camera calibration, stereo calibration and finally stereo matching. For this I am allowed to use the available OpenCV samples and tutorials and adapt them to our needs. While the first two parts weren't much of a problem, I have a question about the stereo matching part:

We should create a colored point cloud in .ply format from the disparity map of two provided images. I'm using this code as a template: https://github.com/opencv/opencv/blob/master/samples/cpp/stereo_match.cpp I get the intrinsic and extrinsic files from the first two parts of the assignment.

My question is, how to get the corresponding color for each 3D-point from the original images and the disparity map?

I'm guessing that each coordinate of the disparity map corresponds to a pixel that both input images share. But how do I get those pixel values?

EDIT: I know that the value of each element of the disparity map represents the disparity of the corresponding pixel between the left and right image. But how do I get the corresponding pixels from the coordinates of the disparity map? Example: my disparity value at coordinates (x, y) is 128. 128 represents the depth. But how do I know which pixel in the original left or right image this corresponds to?
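For what it's worth, under the usual rectified-stereo convention (which OpenCV's stereo matchers follow, though you should verify this against your own pipeline) the disparity map is defined on the left image's pixel grid. A minimal numpy sketch of the correspondence — the helper names here are made up for illustration:

```python
import numpy as np

def matching_right_pixel(x, y, d):
    """In rectified images, left pixel (x, y) with disparity d matches
    right pixel (x - d, y): same scanline, shifted column."""
    return (x - d, y)

def point_colors(left_rgb, disparity, min_disp=1):
    """Colors for the point cloud: because the disparity map is aligned
    with the left image, each valid disparity pixel simply takes the
    left image's color at the same (x, y)."""
    valid = disparity >= min_disp          # drop unmatched pixels
    return left_rgb[valid], valid
```

So the answer to "which pixel does (x, y) correspond to" is: (x, y) itself in the rectified left image, and (x - d, y) in the rectified right image.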

Additional questions: I also have questions about StereoSGBM and which parameters make sense. Here are my (downscaled for upload) input images:

left: (image)

right: (image)

Which give me these rectified images:

left: (image)

right: (image)

From this I get this disparity image:

(image)

For the disparity image: this is the best result I could achieve using blockSize=3 and numDisparities=512. However, I'm not at all sure whether those parameters make any sense. Are these values sensible?
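Two documented constraints are worth checking here: in OpenCV, numDisparities must be divisible by 16, and blockSize must be an odd number (typical values fall in the 3–11 range). So blockSize=3 and numDisparities=512 are legal, though 512 is a very large search range — it should roughly match the maximum disparity of your nearest object, and for downscaled images something much smaller is often enough. A tiny sanity-check sketch (the helper function is hypothetical):

```python
def check_sgbm_params(num_disparities, block_size):
    """Validate the two hard constraints OpenCV documents for StereoSGBM."""
    if num_disparities <= 0 or num_disparities % 16 != 0:
        raise ValueError("numDisparities must be a positive multiple of 16")
    if block_size < 1 or block_size % 2 == 0:
        raise ValueError("blockSize must be an odd number >= 1")
    return True
```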

"Example: my disparity value at coordinates (x, y) is 128. 128 represents the depth." This is not correct. Depth is inversely proportional to disparity. You will have to do another set of calculations to obtain the depth from the disparity map obtained from OpenCV. The matching pixel locations you are looking for are usually not exposed in any APIs I have worked with. Do read the semi-global matching paper written by Hirschmüller to better understand how the matching actually happens. — cplusplusrat
If depth is inversely proportional to disparity, then the disparity does represent the depth, doesn't it? Anyway, the point of that sentence was to clarify that I'm not interested in the actual disparity value, but in the coordinates and how they correspond to the pixels of the initial left and right images. — Roland Deschain
Depth and disparity are distinct concepts and not interchangeable. Disparity for images of the same dimensions can be computed purely from the image content, but depth calculation requires additional parameters such as the distance between the two cameras (the baseline) and the focal length. And as I mentioned before, I do not believe most open-source APIs that do the disparity calculation for you expose the matching pixel information. — cplusplusrat
Ah OK, maybe I was on the wrong track here. As mentioned, the next step is to calculate the 3D point cloud. Maybe from those points I can map back to the correct pixel colors. — Roland Deschain
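For reference, the depth/disparity relation the comments are discussing is Z = f·B/d for rectified stereo. A minimal sketch, assuming fx (focal length in pixels) and the baseline come from your calibration:

```python
def depth_from_disparity(d, fx, baseline):
    """Z = fx * baseline / d for rectified stereo.

    d is the disparity in pixels, fx the focal length in pixels, and
    baseline the camera separation; Z comes out in the baseline's unit.
    """
    return fx * baseline / d
```

So a disparity of 128 with, say, fx = 640 px and a 0.1 m baseline gives Z = 0.5 m: larger disparities mean nearer points, which is the inverse relationship cplusplusrat describes.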

2 Answers

1
votes

My question is, how to get the corresponding color for each 3D-point from the original images and the disparity map?

A disparity map is nothing but the distance between matching pixels along corresponding epipolar lines in the left and right images. This means you just need the pixel intensity to compute the disparity, which in turn implies you could do this computation on just the grayscale left/right images or on any single channel of the left/right images.

I am pretty sure the disparity image you are computing operates on grayscale images obtained from the original RGB images. If you want to compute a color disparity image, you just need to extract the individual color channels of the left and right images and compute the corresponding disparity map for each channel. The outcome will then be a 3-channel disparity map.

Additional questions: I also have questions about StereoSGBM and which parameters make sense. Here are my (downscaled for upload) input images:

There is no universally good answer to this for the general case; you need a parameter tuner. See https://github.com/guimeira/stereo-tuner as an example. You should be able to write your own in OpenCV pretty easily if you want.

0
votes

OK, the solution to this problem is to use the projectPoints() function from OpenCV. Basically: calculate the 3D points from the disparity image, project them onto the 2D image, and use the color of the pixel you hit.
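A rough numpy sketch of that pipeline, under assumed pinhole parameters fx, cx, cy and the calibration baseline (in OpenCV, cv2.reprojectImageTo3D with the Q matrix from stereoRectify does the back-projection for you). Note that when the color source is the left image, the projectPoints() step collapses: a point back-projected from left pixel (x, y) projects onto that same (x, y), so you can index the left image directly — projectPoints() is really needed when sampling color from a different view:

```python
import numpy as np

def cloud_with_colors(disparity, left_rgb, fx, cx, cy, baseline):
    """Back-project valid disparity pixels to 3D and pick their colors
    from the left image (the disparity map is defined on its grid)."""
    ys, xs = np.nonzero(disparity > 0)      # keep matched pixels only
    d = disparity[ys, xs].astype(float)
    Z = fx * baseline / d                   # depth from disparity
    X = (xs - cx) * Z / fx                  # pinhole back-projection,
    Y = (ys - cy) * Z / fx                  # assuming fx == fy
    points = np.stack([X, Y, Z], axis=1)
    colors = left_rgb[ys, xs]               # color at the same (x, y)
    return points, colors
```

Writing each points/colors pair out as an ASCII .ply vertex line (x y z r g b) then gives the colored point cloud.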