
I am trying to get a 3D point cloud from images of my stereo camera. I calibrated the stereo camera (240x320 per image) with OpenCV in Python and got a reprojection error of 0.25. I used a chessboard pattern with 9x10 rows and columns, took 10 images from different angles under different lighting conditions, and moved the camera while doing so (I'm not sure, maybe that's the problem). I then imported the images into MATLAB, which gave me the chessboard corners (I chose MATLAB because its corner detection was much more precise than OpenCV's), and loaded those corners into my Python calibration script.

The intrinsic calibration is flawless:

stereo images after intrinsic calibration and undistortion

But I'm not sure whether I can use the results of the extrinsic calibration, because the images are still warped into a circular shape and have those black regions; only the rectangular centers of the images look good:

stereo images after extrinsic calibration

The parameters and matrices are saved and loaded by another script. That script runs a Sobel operator over a left and a right image and finds the matching edge pixels in both images:

matched pixels

Then I hand those matched pixels over to cv.triangulatePoints(projMatr1, projMatr2, projPoints1, projPoints2), which should give me the 3D coordinates of the points. I plot them with Plotly, but the result is unrecognizable and nowhere near the edges of the cup shown in the images:

3D plot

Does anyone have an idea what it could be or what I'm doing wrong? I can post my code here if needed.


1 Answer


OK, I tried exchanging the row and column elements of the matched points, [row, col] => [col, row], and now the triangulation is working:

result after changing row and column
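The swap itself is a one-liner in NumPy (the match coordinates below are arbitrary example values):

```python
import numpy as np

# Matches stored as (row, col) from NumPy indexing; OpenCV wants (x, y)
matches_rc = np.array([[120, 45], [130, 50]], dtype=np.float32)

# Reverse the last axis to get (col, row) = (x, y), then transpose to the
# 2xN layout that cv2.triangulatePoints expects
pts_xy = matches_rc[:, ::-1].T
```

The first output row holds the x (column) coordinates, the second the y (row) coordinates.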

There are still false matches, but that's OK; I expected them. However, swapping the rows and columns doesn't work with the images:

matches after changing row and column

but that's a different story, and I'm satisfied with the current result.