I am trying to get a 3D point cloud from images of my stereo camera. I calibrated the stereo camera (240x320 each) with OpenCV in Python and got a reprojection error of 0.25. I used a 9x10 chessboard pattern and took 10 images from different angles under different lighting conditions, moving the camera between shots (I'm not sure, maybe that's the problem). Then I imported the images into MATLAB, which detected the chessboard corners for me (I chose MATLAB because its corner detection was much more precise than OpenCV's). Finally I loaded those corners into my Python calibration script.
The intrinsic calibration looks flawless:
But I'm not sure whether I can trust the results of the extrinsic calibration, because the rectified images are still circular and have those black borders; only the rectangular centers of the images look good:
The parameters and matrices are saved and then loaded by another script. That script runs a Sobel operator over a left and a right image and finds the matching edge pixels in both images:
Then I hand those matched pixels over to cv.triangulatePoints(projMatr1, projMatr2, projPoints1, projPoints2), which should give me the 3D coordinates of the points. I plot them with Plotly, but the result is unrecognizable and nowhere near the edges of the cup in the images:
Does anyone have an idea what the problem could be or what I'm doing wrong? I can post my code here if needed.