I have a complete stereo calibration from (Python) OpenCV, i.e. all the necessary inputs and outputs of stereoRectify.
[Image: visualization of the stereo camera setup]
My goal is to compute, for each of the two cameras, the camera center in world coordinates, and to map arbitrary image coordinates (in pixels) into the world coordinate system after stereo rectification. Later I want to intersect the resulting viewing rays (going from each camera center through the back-projected image points in the world coordinate system) with a plane in 3D that I have computed in the world coordinate system.
For the unrectified cameras, I can simply apply the inverse rotation and translation to transform points from the coordinate system of the right camera to that of the left camera (which I treat as the world coordinate system). Pixel coordinates can be lifted from 2D to 3D in a camera's own coordinate system using its camera matrix.
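For concreteness, this is the unrectified case that already works for me, sketched with made-up calibration values (the numbers below are placeholders, not my actual calibration):

```python
import numpy as np

# Placeholder calibration values for illustration only; in practice these
# come from cv2.stereoCalibrate (K2 = right camera matrix; R, T map
# left-camera coordinates to right-camera coordinates: x_right = R @ x_left + T).
K2 = np.array([[800.0, 0.0, 320.0],
               [0.0, 800.0, 240.0],
               [0.0, 0.0, 1.0]])
R = np.eye(3)                           # rotation left -> right
T = np.array([[-60.0], [0.0], [0.0]])   # translation left -> right (baseline)

# Back-project a pixel of the (unrectified) right image to the 3D point on
# its viewing ray at depth Z, in the right camera's coordinate system.
u, v, Z = 400.0, 250.0, 1000.0
p_right = Z * np.linalg.inv(K2) @ np.array([[u], [v], [1.0]])

# Invert the extrinsics: x_right = R @ x_left + T  =>  x_left = R.T @ (x_right - T)
p_left = R.T @ (p_right - T)            # point in left-camera (world) coordinates

# The right camera center is where x_right = 0, i.e. C = -R.T @ T in world coordinates.
C_right = -R.T @ T
```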
However, after rectification both cameras are rotated towards each other (to make their image planes coplanar) and are row-aligned using R_rect (both steps together are contained in R1 and R2). Furthermore, the camera matrices change, and we get new projection matrices P1 and P2. I am having trouble reverting these transformations.
Example:
I have a point [u, v] in the rectified image of the right camera. Using the projection matrix P2, I can transform this point into 3D (in the coordinate system of the rectified right camera), obtaining a point [X, Y, Z]. How do I get the position of this point in the world coordinate system, i.e. the coordinate system of the unrectified left camera?
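This is the inversion chain I am attempting; I am not sure it is correct, and all values below are placeholders standing in for my actual calibration results:

```python
import numpy as np

# Placeholder matrices for illustration only.
P2 = np.array([[800.0, 0.0, 320.0, -48000.0],
               [0.0, 800.0, 240.0, 0.0],
               [0.0, 0.0, 1.0, 0.0]])   # rectified projection matrix (made up)
R2 = np.eye(3)                           # rectification rotation of the right camera
R = np.eye(3)                            # unrectified rotation left -> right
T = np.array([[-60.0], [0.0], [0.0]])    # unrectified translation left -> right

# 1. Back-project pixel [u, v] at an assumed depth Z using the rectified
#    intrinsics K' = P2[:, :3].
u, v, Z = 400.0, 250.0, 1000.0
K_rect = P2[:, :3]
p_rect = Z * np.linalg.inv(K_rect) @ np.array([[u], [v], [1.0]])

# 2. Undo the rectification rotation (R2 maps unrectified -> rectified).
p_unrect = R2.T @ p_rect

# 3. Undo the original extrinsics: x_right = R @ x_left + T.
p_world = R.T @ (p_unrect - T)
```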