
I am doing stereo calibration of two cameras (let's name them L and R) with OpenCV. I use 20 pairs of checkerboard images and compute the transformation of R with respect to L. What I want to do is use a new pair of images, compute the 2d checkerboard corners in image L, transform those points according to my calibration, and draw the corresponding transformed points on image R with the hope that they will match the corners of the checkerboard in that image.

I tried the naive way of transforming the 2d points from [x,y] to [x,y,1], multiplying by the 3x3 rotation matrix, adding the translation vector, and then dividing by z, but the result is wrong, so I guess it's not that simple (?)
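
In code, the naive attempt looks roughly like this (a sketch; R and T stand for the rotation matrix and translation vector returned by cv2.stereoCalibrate, and pts_L for the detected corner pixels):

```python
import numpy as np

# The naive attempt described above (wrong): treat pixel coordinates as 3D points.
# R is the 3x3 rotation and T the 3x1 translation from cv2.stereoCalibrate;
# pts_L is an (N, 2) array of corner pixels detected in image L.
pts_h = np.hstack([pts_L, np.ones((len(pts_L), 1))])  # [x, y] -> [x, y, 1]
pts_t = (R @ pts_h.T + T).T                           # rotate, then translate
pts_R = pts_t[:, :2] / pts_t[:, 2:]                   # divide by z
```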

Edit (to clarify some things):

The reason I want to do this is basically that I want to validate the stereo calibration on a new pair of images. So I don't actually want to get a new 2d transformation between the two images; I want to check whether the 3d transformation I have found is correct.

This is my setup:

[setup diagram]

I have the rotation and translation relating the two cameras (E), but I don't have rotations and translations of the object in relation to each camera (E_R, E_L).

Ideally what I would like to do:

  1. Choose the 2d corners in the image from camera L (in pixels, e.g. [100,200]).
  2. Do some kind of transformation on the 2d points based on the matrix E that I have found.
  3. Get the corresponding 2d points in the image from camera R, draw them, and hopefully they match the actual corners!

The more I think about it though, the more I am convinced that this is wrong/can't be done.

What I am probably going to try now (a sketch follows this list):

  1. Using the intrinsic parameters of the cameras (say I_R and I_L), solve two least-squares systems to find E_R and E_L.
  2. Choose 2d corners in image from camera L.
  3. Project those corners to their corresponding 3d points (3d_points_L).
  4. Do: 3d_points_R = (E_L).inverse * E * E_R * 3d_points_L
  5. Get the 2d_points_R from 3d_points_R and draw them.
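
A rough sketch of how steps 1-5 could look with OpenCV's Python API (variable names are illustrative; for step 4 the stereo extrinsics are applied directly to points already expressed in camera L's frame):

```python
import numpy as np
import cv2

# Assumed inputs (names are illustrative):
#   I_L, I_R  - 3x3 intrinsic matrices; d_L, d_R - distortion coefficients
#   corners_L - (N, 1, 2) corners from cv2.findChessboardCorners on image L
#   obj_pts   - (N, 3) float32 board corner grid in board coordinates (e.g. cm)
#   R, T      - extrinsics from cv2.stereoCalibrate (camera L frame -> camera R frame)

# Step 1: pose of the board in camera L (E_L) via PnP
# (E_R could be found the same way from the corners in image R).
_, rvec_L, tvec_L = cv2.solvePnP(obj_pts, corners_L, I_L, d_L)

# Step 3: lift the corners to 3d points in camera L's frame: X_L = R_L * X_board + t_L.
R_L, _ = cv2.Rodrigues(rvec_L)
pts3d_L = (R_L @ obj_pts.T + tvec_L).T

# Step 4: move them into camera R's frame with the stereo extrinsics: X_R = R * X_L + T.
pts3d_R = (R @ pts3d_L.T + T).T

# Step 5: project into image R; these should land on the detected corners there.
pts2d_R, _ = cv2.projectPoints(pts3d_R, np.zeros(3), np.zeros(3), I_R, d_R)
```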

I will update when I have something new.

Are you saying that you have the rotation and translation matrices relating the two images and you want to combine them? You need to multiply them, not add them: final = [T]*[R]. Also, are your coordinates 3-dimensional? When you say divide by z, do you mean by z or by the third element in homogeneous coordinates? If you really are using 3-dimensional transforms to map between images, you probably also have a camera matrix, and you will need to multiply by it to get the coordinates in the image. – Hammer

1 Answer


It is actually easy to do, but you're making several mistakes. Remember that after stereo calibration the resulting rotation and translation relate the position and orientation of the second camera to the first camera, expressed in the first camera's 3D coordinate system. Also remember that to find the 3D position of a point from a pair of cameras you need to triangulate it.

By setting the z component to 1 you're making two mistakes. First, most likely you have used the common OpenCV stereo calibration code and given the distance between the corners of the checkerboard in cm. Hence z=1 means 1 cm away from the camera center, which is extremely close to the camera. Second, by setting the same z for all the points you are saying the checkerboard is perpendicular to the principal axis (aka optical axis, or principal ray), while most likely that's not the case in your image. So you're transforming some virtual 3D points to the second camera's coordinate system first and then projecting them onto the image plane.
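
Here is a minimal sketch of a correct check along those lines: triangulate the corner pairs, transform the 3D points with the stereo extrinsics, and reproject (variable names are illustrative):

```python
import numpy as np
import cv2

# Assumed inputs (names are illustrative), all from cv2.stereoCalibrate:
#   I_L, d_L, I_R, d_R - intrinsics and distortion of the two cameras
#   R, T               - extrinsics mapping camera L's frame to camera R's
# pts_L, pts_R are matching (N, 2) float32 corner arrays from the new image pair.

# Undistort to normalized image coordinates, so identity intrinsics can be used below.
norm_L = cv2.undistortPoints(pts_L.reshape(-1, 1, 2), I_L, d_L)
norm_R = cv2.undistortPoints(pts_R.reshape(-1, 1, 2), I_R, d_R)

# Projection matrices in camera L's frame: P_L = [I|0], P_R = [R|T].
P_L = np.hstack([np.eye(3), np.zeros((3, 1))])
P_R = np.hstack([R, T.reshape(3, 1)])

# Triangulate (homogeneous 4xN result) and convert to 3D points in L's frame.
X_h = cv2.triangulatePoints(P_L, P_R, norm_L.reshape(-1, 2).T, norm_R.reshape(-1, 2).T)
X_L = (X_h[:3] / X_h[3]).T

# The actual check: transform into camera R's frame, project, and compare.
reproj_R, _ = cv2.projectPoints(X_L, cv2.Rodrigues(R)[0], T, I_R, d_R)
err = np.linalg.norm(reproj_R.reshape(-1, 2) - pts_R, axis=1)
print("mean reprojection error in image R (px):", err.mean())
```

If the calibration is good, this error should be small, roughly comparable to the RMS error reported by stereoCalibrate.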

If you want to transform just planar points, then you can find the homography between the two views (OpenCV has cv2.findHomography) and use that.
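
For example, a sketch of that route, estimating the homography from the matched corners themselves:

```python
import cv2

# Sketch: estimate the homography directly from the matched checkerboard
# corners (pts_L, pts_R are matching (N, 2) float32 arrays) and map L -> R.
H, mask = cv2.findHomography(pts_L, pts_R, cv2.RANSAC)
mapped_R = cv2.perspectiveTransform(pts_L.reshape(-1, 1, 2), H).reshape(-1, 2)
```

Keep in mind this only validates a planar mapping for that particular board pose, not the full 3D extrinsics.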