
I have used OpenCV's stereoCalibrate function to get the relative rotation and translation from one camera to another. What I'd like to do is change the origin of the world space and update the extrinsics of both cameras accordingly. I can easily do this with solvePnP for cameras that share a view, but I'd like to do it for cameras where each is defined by its pose relative to an adjacent camera and their fields of view don't overlap - like daisy-chaining their relative poses.

I've determined the pose of the cameras relative to where I'd like the world origin and orientation to be using solvePnP, so I know what the final extrinsics 'should be'. I've then tried combining the rotation matrices and translation vectors from the stereo calibration with the solvePnP result from the primary camera to reproduce that value, both with composeRT and manually, but to no avail.

Edit: It turns out that, for whatever reason, stereoCalibrate and solvePnP produce mirrored versions of the poses: stereoCalibrate appears to produce poses with a 180 degree rotation around the Y-axis. By applying that rotation to the relative rotation matrix and translation vector it produces, everything works!
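The correction described in the edit can be sketched as follows. This is an assumption about the asker's setup, not a general OpenCV behavior; the function name `fix_stereo_pose` and the argument names `R_rel`/`t_rel` (standing in for stereoCalibrate's `R` and `T` outputs) are illustrative:

```python
import numpy as np

# 180-degree rotation about the Y axis: flips the X and Z axes.
R_y180 = np.array([[-1.0, 0.0,  0.0],
                   [ 0.0, 1.0,  0.0],
                   [ 0.0, 0.0, -1.0]])

def fix_stereo_pose(R_rel, t_rel):
    """Apply the Y-180 correction to a relative pose from stereoCalibrate.

    R_rel: 3x3 relative rotation, t_rel: 3-vector relative translation
    (placeholder names for stereoCalibrate's R and T). Whether this
    correction is needed depends on your coordinate conventions.
    """
    return R_y180 @ R_rel, R_y180 @ t_rel
```

Note that `R_y180` is a proper rotation (determinant +1), so applying it does not introduce a reflection; it only re-expresses the pose in the flipped frame.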


1 Answer


If you know the relative poses between all the cameras via a chain (relative poses between cameras a, b and between b, c), you can combine the rotations and translations from camera a to c via b with

R_ac = R_ab R_bc

t_ac = t_ab + R_ab t_bc

In other words, the new rotation from a to c chains the rotation from a to b with the rotation from b to c. The translation is combined in the same way, except that the second translation vector must first be rotated by R_ab to express it in camera a's frame.
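A minimal NumPy sketch of this composition, assuming the convention implied by the formulas above (R_ab, t_ab map points from camera b's frame into camera a's frame, i.e. x_a = R_ab x_b + t_ab); the function name `compose_poses` is illustrative:

```python
import numpy as np

def compose_poses(R_ab, t_ab, R_bc, t_bc):
    """Chain relative poses a->b and b->c into a->c.

    Convention (an assumption here): x_a = R_ab x_b + t_ab, so the
    composed transform maps camera c coordinates into camera a coordinates.
    """
    R_ac = R_ab @ R_bc                 # R_ac = R_ab R_bc
    t_ac = t_ab + R_ab @ t_bc          # t_ac = t_ab + R_ab t_bc
    return R_ac, t_ac
```

OpenCV's cv2.composeRT does the same composition with Rodrigues vectors instead of 3x3 matrices (and a mirrored argument order depending on your convention), so checking a hand-rolled version like this against it is a good way to pin down which convention your calibration uses.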

Some amount of error is expected, depending on how accurate your pairwise calibration is. Errors in camera pose accumulate over the chain, so if your camera poses form a full circle, you generally won't recover exactly the same pose for the starting/ending camera.