2 votes

I have two images with known corresponding 2D points, the intrinsic parameters of both cameras, and the 3D transformation between the cameras. I want to calculate the 2D reprojection error from one image to the other.

To do so, I thought about deriving a fundamental matrix from the transformation, so that I can compute the point-to-line distance between each point and its corresponding epipolar line. How can I get the fundamental matrix?

I know that E = [t]_x * R and F = K2^(-T) * E * K1^(-1), where E is the essential matrix, [t]_x is the skew-symmetric matrix of the translation vector, and K1, K2 are the intrinsic matrices (using the convention x2 = R * x1 + t). However, this yields a null matrix if the motion is a pure rotation (t = [0 0 0]). I know that in this case a homography explains the motion better than the fundamental matrix, so I could compare the norm of the translation vector against a small threshold to choose between a fundamental matrix and a homography. Is there a better way of doing this?
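For concreteness, here is a minimal sketch of this construction in Python with NumPy (the function names are mine, not from any library): it builds F from known R, t, K1, K2 under the convention x2 = R * x1 + t, and computes the point-to-epipolar-line distances mentioned above. Note that if ||t|| is (near) zero, E and F degenerate and these distances become meaningless, which is exactly the case where a homography is the better model.

    import numpy as np

    def skew(t):
        # Skew-symmetric matrix [t]_x, so that skew(t) @ v == np.cross(t, v)
        return np.array([[0.0, -t[2], t[1]],
                         [t[2], 0.0, -t[0]],
                         [-t[1], t[0], 0.0]])

    def fundamental_from_pose(R, t, K1, K2):
        # E = [t]_x R, then F = K2^(-T) E K1^(-1)
        # (maps points in image 1 to epipolar lines in image 2)
        E = skew(t) @ R
        return np.linalg.inv(K2).T @ E @ np.linalg.inv(K1)

    def epipolar_distances(F, pts1, pts2):
        # Point-to-epipolar-line distances in image 2;
        # pts1, pts2 are (N, 2) arrays of corresponding pixel coordinates
        x1 = np.hstack([pts1, np.ones((len(pts1), 1))])  # homogeneous coords
        x2 = np.hstack([pts2, np.ones((len(pts2), 1))])
        lines = (F @ x1.T).T                             # lines a*x + b*y + c = 0
        return np.abs(np.sum(lines * x2, axis=1)) / np.linalg.norm(lines[:, :2], axis=1)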

1 – If both rotation and translation are non-zero, a homography doesn't model the motion correctly. - ChronoTrigger
Sorry, I don't have any, but this is aimed at pictures of rooms. The structure shouldn't be important, though. - ChronoTrigger
Oh! Sorry, I did not read your question correctly... I deleted my comment. - Humam Helfawi

1 Answer

1 vote

"I want to calculate the 2D reprojection error from one image to the other."

Then go and calculate it. Your setup is calibrated, so you don't need anything other than a known piece of 3D geometry. Forget about the epipolar error, which may as well be undefined if your camera motion is (close to) a pure rotation.

Take an object of known size and shape (for example, a checkerboard) and work out its location in 3D space from one camera view; for a checkerboard, you can fit a homography between its physical model and its projection, then decompose it into [R|t]. Then project the now-located 3D shape into the other camera given that camera's calibrated parameters, and compare the projection with the object's actual image, as in the sketch below.
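Here is one way this recipe could look in Python with OpenCV. The variable names (K1, dist1, K2, dist2, R_12, t_12, board_pts, corners1, corners2) are placeholders for data you already have from calibration and corner detection, not library names; for a planar target, cv2.solvePnP does the same job as fitting the model-to-image homography and decomposing it into [R|t].

    import numpy as np
    import cv2

    # Assumed inputs (placeholders):
    #   K1, dist1, K2, dist2 : intrinsics and distortion of the two cameras
    #   R_12, t_12           : known transform taking camera-1 coords to camera-2 coords
    #   board_pts            : (N, 3) float32 physical corner positions (Z = 0 plane)
    #   corners1, corners2   : (N, 2) float32 detected corners in each image

    # 1. Locate the board in camera-1 coordinates.
    ok, rvec1, tvec1 = cv2.solvePnP(board_pts, corners1, K1, dist1)
    R1, _ = cv2.Rodrigues(rvec1)

    # 2. Chain the board pose with the known camera-1-to-camera-2 transform.
    R2 = R_12 @ R1
    t2 = R_12 @ tvec1 + t_12.reshape(3, 1)
    rvec2, _ = cv2.Rodrigues(R2)

    # 3. Project the located board into the second camera and compare
    #    against the corners actually detected there.
    proj2, _ = cv2.projectPoints(board_pts, rvec2, t2, K2, dist2)
    err = np.linalg.norm(proj2.reshape(-1, 2) - corners2, axis=1)
    print("mean 2D reprojection error (px):", err.mean())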