2 votes

I'm able to get the projection matrix out of a monocular setup after using calibrateCamera.

This stackoverflow answer explains how.

Now, I'm following the stereo calibration sample and I would like to do the same for both cameras after I do stereo rectification (OpenCV's stereoRectify). The method gives me the R1, R2, P1, P2 and Q matrices.

void stereoRectify(InputArray cameraMatrix1, InputArray distCoeffs1, InputArray cameraMatrix2, InputArray distCoeffs2, Size imageSize, InputArray R, InputArray T, OutputArray R1, OutputArray R2, OutputArray P1, OutputArray P2, OutputArray Q, int flags=CALIB_ZERO_DISPARITY, double alpha=-1, Size newImageSize=Size(), Rect* validPixROI1=0, Rect* validPixROI2=0 )

I assume I have to combine them somehow, but I don't understand how to relate these output matrices to the intrinsics and extrinsics of a camera.

Thanks in advance!

EDIT: Let's assume my cameras don't have distortion. I understand I can remap the images using initUndistortRectifyMap and remap. But I'm interested in writing some of my own code using the projection matrix, i.e. for a single camera calibration I get the camera matrix C and the rotation and translation vectors, and I combine them into the projection matrix C * [R|t]. I'd like to do the same, but for the rectified camera position.

2 Answers

3 votes

What kind of projection matrix do you need?

stereoRectify computes the rotation matrices (R1, R2) that transform both image planes onto a common image plane, along with the projection matrices (P1, P2) in the rectified coordinate system. This makes all the epipolar lines parallel, so you can find point correspondences along raster lines. I.e. if you have a 2D point X1 = (x1, y1) on the image plane of camera #1, then the corresponding point on camera #2 will be located on the raster line with the same y1 component. So the search is simplified to one dimension.

If you are interested in computing the joint undistortion and rectification transformation, then you should use the output of stereoRectify as the input of initUndistortRectifyMap, and then remap to apply the transformation. I.e.:

Mat R1, R2, P1, P2, Q;
Rect roi1, roi2;
stereoRectify(M1, D1, M2, D2, img_size, R, T, R1, R2, P1, P2, Q, CALIB_ZERO_DISPARITY, -1, img_size, &roi1, &roi2);

Mat map11, map12, map21, map22;
initUndistortRectifyMap(M1, D1, R1, P1, img_size, CV_16SC2, map11, map12);
initUndistortRectifyMap(M2, D2, R2, P2, img_size, CV_16SC2, map21, map22);

Mat img1r, img2r;
remap(img1, img1r, map11, map12, INTER_LINEAR);
remap(img2, img2r, map21, map22, INTER_LINEAR);

Update #1:

Say you have a point in the world coordinate system: P_W. It can be transformed into the camera coordinate system by applying the extrinsic parameters, i.e. P_C = R*P_W + T, or, in homogeneous coordinates, P_C = [R|T] * P_W.

After the rectification you will have two matrices for each camera:

  • A rotation matrix for each camera (R1, R2) that makes both camera image planes the same plane, and
  • A projection matrix in the new (rectified) coordinate system for each camera (P1, P2); as you can see, the first three columns of P1 and P2 are effectively the new rectified camera matrices.

The transformation of points into the rectified camera coordinate system is a simple matrix multiplication: P_R = R1*P_C.
And the transformation onto the rectified image plane follows the same pattern as above: p_R = P1 * [R1*P_C; 1] (P1 is 3x4, so the point has to be taken in homogeneous coordinates).

1 vote

The answer here is somewhat obvious, although it wasn't so at the time. The projection matrices I was looking for were P1 and P2. I was wondering how to construct them with distortion parameters. Actually, this is not necessary, because the whole remap process undistorts the images, so we can use P1 and P2 directly as the projection matrices. Hope this helps someone.