I have two images, I1 and I2, of an object in a 3D scene. The difference between the images is that the object has moved. The camera position and calibration are known, a few 3D feature points on the object are known, and so is the 3D transform that moved the object. I also have the 2D projections of those feature points.
I want to align the images. So it seems like I have a couple options:
I can look at just the 2D feature points and estimate an affine transform to do the alignment. Intuitively, this seems like it will have errors, because an affine transform cannot account for perspective distortion.
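For the record, here is the kind of affine fit I have in mind, sketched in plain NumPy on made-up synthetic points (in practice I would presumably use something like cv2.estimateAffine2D on the real feature points):

```python
import numpy as np

# Synthetic correspondences src -> dst generated from a known affine
# map x' = A x + b (all values made up for illustration).
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 3]], dtype=float)
A_true = np.array([[1.1, 0.2], [-0.1, 0.9]])
b_true = np.array([3.0, -2.0])
dst = src @ A_true.T + b_true

# Least-squares fit of the 6 affine parameters: each point
# contributes two equations, with design rows [x, y, 1].
X = np.hstack([src, np.ones((len(src), 1))])
params, *_ = np.linalg.lstsq(X, dst, rcond=None)  # solves X @ params ~= dst
A_est, b_est = params[:2].T, params[2]
```

With exact correspondences the fit recovers A and b; with real, noisy feature points it would only minimize the residual, and no affine matrix can reproduce a genuine perspective change.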
I can find the homography and use warpPerspective to do the warp. I'm new to homography transforms, but it sounds like this will take perspective distortion into account. In fact, with my setup, I believe the homography matrix is simply the composition of inverting the projection matrix, inverting the 3D transform, and reprojecting, giving x' = Hx. This seems like it would give me exact image alignment.
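To sanity-check that belief with synthetic numbers: if the feature points happen to lie on a plane n·X = d in camera coordinates, the composition above reduces to the plane-induced homography H = K (R + t nᵀ/d) K⁻¹, and it maps the image points exactly. The intrinsics, plane, and motion below are made up:

```python
import numpy as np

# Made-up intrinsics, plane, and rigid object motion for a sanity check.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
n, d = np.array([0.0, 0.0, 1.0]), 5.0          # plane n.X = d, i.e. z = 5
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])                # object motion X' = R X + t
t = np.array([0.2, -0.1, 0.5])

# Plane-induced homography: H = K (R + t n^T / d) K^-1
H = K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)

def project(X):
    """Pinhole projection of Nx3 camera-frame points to pixels."""
    x = (K @ X.T).T
    return x[:, :2] / x[:, 2:]

# Points on the plane, imaged before and after the motion.
pts = np.array([[x, y, 5.0] for x in (-1.0, 0.0, 1.0)
                            for y in (-1.0, 0.0, 1.0)])
x1 = project(pts)
x2 = project(pts @ R.T + t)

# H maps the first image points exactly onto the second ones.
x1h = np.hstack([x1, np.ones((len(x1), 1))])
x2_pred = (H @ x1h.T).T
x2_pred = x2_pred[:, :2] / x2_pred[:, 2:]
assert np.allclose(x2_pred, x2)
```

So the "invert projection, invert transform, reproject" idea checks out exactly when the points are coplanar, which leads into my second question.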
So, first question: will warpPerspective give better alignment results than warpAffine?
Second question: not all of the feature points lie on the same plane. Can I still use warpPerspective? I think I read that for a homography the points have to lie on the same plane.
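To make this question concrete, here is a quick NumPy experiment (same made-up intrinsics and plane as before, now with a pure sideways shift): the plane-induced H maps an on-plane point exactly, but a point 2 units off the plane lands tens of pixels away from its true reprojection:

```python
import numpy as np

# Made-up setup: plane z = 5 and a sideways shift of the object.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
n, d = np.array([0.0, 0.0, 1.0]), 5.0
R, t = np.eye(3), np.array([0.5, 0.0, 0.0])
H = K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)

def project(X):
    x = K @ X
    return x[:2] / x[2]

def apply_h(p):
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

errs = []
for X in (np.array([1.0, 1.0, 5.0]),   # on the plane (n.X = d)
          np.array([1.0, 1.0, 7.0])):  # 2 units off the plane
    x1 = project(X)
    x2 = project(R @ X + t)
    errs.append(np.linalg.norm(apply_h(x1) - x2))
# errs[0] is ~0; errs[1] is tens of pixels.
```

So a single homography cannot align off-plane points; the residual is the parallax caused by the depth difference.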
Third question: since the homography is a 3x3 matrix, does that mean I need to know the z-coordinate of every pixel in the image in order to do the transform?
Thanks.