Knowing the rotations and translations of two cameras in world coordinates (relative to some known point), how would I calibrate my stereo system?
In OpenCV the usual approach is to place a calibration pattern in front of both cameras to obtain point correspondences. These correspondences are passed to stereoCalibrate, which computes the rotation matrix R and translation vector T (as well as the fundamental matrix F). In the next step, stereoRectify performs the stereo rectification that row-aligns the images of the two cameras. stereoRectify needs R and T to compute the homographies for the perspective transformation of the images, and it also computes the Q matrix for converting disparity to depth.
Given that R and T are already known in the world coordinate system (known are the rotation around the z-axis (the floor-to-ceiling axis, i.e. the yaw angle in aeronautics) and the rotation around the axis perpendicular to the camera's viewing direction (the pitch angle)), in which coordinate system should they be passed to stereoRectify? What I mean is that there are three coordinate systems involved: that of Camera1, that of Camera2, and the (or a) world coordinate system.
The essential matrix E can be computed as R * S, where S is the skew-symmetric matrix of T, and the fundamental matrix F as M_r.inv().t() * E * M_l.inv(), following Learning OpenCV 3 by Kaehler and Bradski (M_r and M_l are the intrinsic matrices of the right and left camera, respectively). Here the same question about R and T arises: is it the rotation from one camera to the other expressed in world coordinates, or e.g. in the coordinate system of one of the cameras?
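For reference, the two formulas above can be written out numerically. This is only a sketch under assumptions: R, T, and the intrinsics M_l, M_r are invented values, and R/T are taken as the left-to-right transform expressed in the left camera's frame (x_r = R x_l + T), which is one of the conventions under discussion:

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix S of t, so that S @ v == np.cross(t, v)."""
    return np.array([[ 0.0,  -t[2],  t[1]],
                     [ t[2],  0.0,  -t[0]],
                     [-t[1],  t[0],  0.0]])

# Hypothetical relative pose (assumed values): identity rotation and a
# 60 mm baseline, interpreted as x_r = R @ x_l + T in the left frame.
R = np.eye(3)
T = np.array([-60.0, 0.0, 0.0])

# Essential matrix per the formula in the question: E = R * S.
E = R @ skew(T)

# Hypothetical intrinsics of left and right cameras (assumed identical).
M_l = np.array([[800.0,   0.0, 320.0],
                [  0.0, 800.0, 240.0],
                [  0.0,   0.0,   1.0]])
M_r = M_l.copy()

# Fundamental matrix: F = M_r^-T * E * M_l^-1.
F = np.linalg.inv(M_r).T @ E @ np.linalg.inv(M_l)

# Sanity check: for a 3D point seen by both cameras, the epipolar
# constraint x_r^T F x_l should be (numerically) zero.
X = np.array([100.0, 50.0, 1000.0])      # arbitrary point, left frame
x_l = M_l @ X                            # homogeneous left pixel
x_r = M_r @ (R @ X + T)                  # homogeneous right pixel
print(x_r @ F @ x_l)
```

The epipolar-constraint check at the end is a useful way to verify experimentally which interpretation of R and T is the right one: with the wrong frame the residual will not be near zero.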
A sketch of the involved coordinate systems can be found here:
How is the camera coordinate system in OpenCV oriented? However, it is still unclear to me how exactly R and T should be calculated.