7 votes

I have a stereo camera system calibrated with OpenCV and Python, and I am trying to use it to compute the 3D positions of image points. I have the intrinsic and extrinsic matrices, as well as the E, F, R, and T matrices, but I am confused about how to triangulate the 2D image points into 3D object points. I have read the following post, but the process is still unclear to me: In a calibrated stereo-vision rig, how does one obtain the "camera matrices" needed for implementing a 3D triangulation algorithm?. From reading around, I get the impression that the fundamental matrix (F) is important, but I haven't found a clear way to link it to the projection matrix (P). Can someone please walk me through the process of getting from 2D to 3D?

I appreciate any help I can get.


1 Answer

14 votes

If you have calibrated your stereo camera, you should have the intrinsics K1 and K2 for each camera, as well as the rotation R12 and translation t12 from the first camera to the second. From these you can form the camera projection matrices P1 and P2 as follows:

P1 = K1 * [I3 | 0]
P2 = K2 * [R12 | t12]

Here, I3 is the 3x3 identity matrix, and the notation [R | t] means stacking R and t horizontally into a 3x4 matrix. Note that you do not need the fundamental matrix F for this step: with a calibrated rig, the projection matrices follow directly from the intrinsics and extrinsics.
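
For concreteness, here is a minimal NumPy sketch of this construction. The variable names K1, K2, R, and T stand in for the outputs of your own calibration (e.g. from cv2.stereoCalibrate), and the numeric values below are only illustrative placeholders:

import numpy as np

# Placeholder calibration results; substitute your own K1, K2, R, T
K1 = np.array([[700.0, 0.0, 320.0],
               [0.0, 700.0, 240.0],
               [0.0, 0.0, 1.0]])
K2 = K1.copy()
R = np.eye(3)                          # rotation, camera 1 -> camera 2
T = np.array([[-0.1], [0.0], [0.0]])   # translation, camera 1 -> camera 2

# P1 = K1 * [I3 | 0]: camera 1 defines the world origin
P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])

# P2 = K2 * [R12 | t12]
P2 = K2 @ np.hstack([R, T])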

Then you can use the function triangulatePoints (documentation), which performs sparse stereo triangulation from the two camera matrices.
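
Continuing the sketch above, a hedged example of calling cv2.triangulatePoints; pts1 and pts2 are hypothetical matched pixel coordinates, and in practice you would undistort your measured points first (e.g. with cv2.undistortPoints, passing P=K1 so the output stays in pixel units):

import cv2

# Hypothetical matched points in the left and right images (N x 2)
pts1 = np.array([[320.0, 240.0], [400.0, 260.0]])
pts2 = np.array([[310.0, 240.0], [391.0, 260.0]])

# triangulatePoints expects 2xN point arrays and the 3x4 projection
# matrices; it returns 4xN points in homogeneous coordinates
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)

# Divide by the homogeneous coordinate to get Euclidean 3D points (N x 3)
pts3d = (pts4d[:3] / pts4d[3]).T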

If you want dense triangulation or depth-map estimation, there are several functions for that. You first need to rectify the two images using stereoRectify (documentation), and then perform stereo matching, for example with StereoBM (documentation).
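
As a rough sketch of that dense pipeline (the file names, the placeholder calibration values, and the distortion coefficients D1, D2 are assumptions standing in for your own stereoCalibrate results and image pair):

import numpy as np
import cv2

# Placeholder calibration; substitute your stereoCalibrate results
K1 = K2 = np.array([[700.0, 0.0, 320.0],
                    [0.0, 700.0, 240.0],
                    [0.0, 0.0, 1.0]])
D1 = D2 = np.zeros(5)                  # distortion coefficients
R = np.eye(3)
T = np.array([[-0.1], [0.0], [0.0]])

# Hypothetical rectification input: a grayscale stereo image pair
imgL = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
imgR = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
size = imgL.shape[::-1]                # (width, height)

# Rectification transforms; Q reprojects disparity to 3D
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)

# Remap both images so that epipolar lines become horizontal
map1x, map1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
rectL = cv2.remap(imgL, map1x, map1y, cv2.INTER_LINEAR)
rectR = cv2.remap(imgR, map2x, map2y, cv2.INTER_LINEAR)

# Block matching; StereoBM returns fixed-point disparity scaled by 16
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(rectL, rectR).astype(np.float32) / 16.0

# Dense 3D points, one per pixel, from the disparity map
points3d = cv2.reprojectImageTo3D(disparity, Q)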