I am trying to match features between pairs of images taken with an endoscopic camera. I see very poor matching performance (few features match) when the camera is translated, even though the overlap between the images is still quite high.
A few questions:
- Might the low number of matches come from the vignetting that is present in the images? (SIFT descriptors encode local gradients, and if the vignette adds a roughly constant radial gradient, could that corrupt the descriptors? A simple correction I am considering is sketched after this list.)
- Could the camera calibration be poor?
- Do you have any additional suggestions for improving the matching?
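To test the vignetting hypothesis from the first question, I could try dividing out an estimated illumination field before detection. Here is a minimal sketch of what I have in mind, assuming OpenCV and a simple Gaussian-blur flat-field estimate (the `sigma` value is an arbitrary placeholder, not something I have tuned):

```python
import cv2
import numpy as np

def remove_vignette(gray, sigma=75):
    """Crude flat-field correction: estimate a smooth illumination field
    with a large Gaussian blur and divide it out."""
    gray = gray.astype(np.float32) + 1e-6          # avoid division by zero
    illum = cv2.GaussianBlur(gray, (0, 0), sigma)  # smooth illumination estimate
    corrected = gray / illum
    # Rescale back to 8-bit so the SIFT detector can consume it
    corrected = cv2.normalize(corrected, None, 0, 255, cv2.NORM_MINMAX)
    return corrected.astype(np.uint8)
```

If the vignette really is hurting the descriptors, I would expect the number of verified matches to go up after running both images through something like this.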
Here's what I am doing:
- Images are remapped based on a camera calibration done with a checkerboard pattern
- Features are detected with SIFT (VLFeat)
- Features are matched, followed by a geometric verification step (RANSAC with a fairly high inlier threshold)
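For reference, here is a rough OpenCV approximation of the pipeline (I actually use VLFeat, so the detector parameters and thresholds differ; `K`, `dist`, `ratio`, and `ransac_thresh` are placeholder names, not my exact settings):

```python
import cv2
import numpy as np

def match_pair(img1, img2, K, dist, ratio=0.8, ransac_thresh=5.0):
    """Undistort, detect SIFT features, ratio-test match, then verify with RANSAC."""
    # Remap using the checkerboard calibration (K = intrinsics, dist = distortion coeffs)
    und1 = cv2.undistort(img1, K, dist)
    und2 = cv2.undistort(img2, K, dist)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(und1, None)
    kp2, des2 = sift.detectAndCompute(und2, None)

    # Lowe ratio test on 2-NN matches
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    if len(good) < 4:
        return [], None  # not enough matches for a homography

    # Geometric verification: homography fit with RANSAC
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
    if mask is None:
        return [], None
    inliers = [m for m, keep in zip(good, mask.ravel()) if keep]
    return inliers, H
```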
Here are two examples (red = features found but not matched; green = features that matched after geometric verification).

Small translation = reasonable matching
Large translation = poor matching