I need to precisely align two images. To do that I am using the Enhanced Correlation Coefficient (ECC) algorithm, which gives me great results, except for images that are rotated a lot. For example, if the reference image (base image) and the test image (the one I want to align) are rotated by 90 degrees, the ECC method doesn't work. That is expected according to the documentation of findTransformECC(), which says:
Note that if images undergo strong displacements/rotations, an initial transformation that roughly aligns the images is necessary (e.g., a simple euclidean/similarity transform that allows for the images showing the same image content approximately).
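For context, this is a minimal sketch of how such an initial guess can be passed to findTransformECC (the warp_init values are placeholders for a rough ~90 degree rotation, not from my actual data; with some OpenCV 4.x builds, findTransformECC may also want inputMask and gaussFiltSize arguments):

import cv2
import numpy as np

# Rough initial Euclidean guess (placeholder values: ~90 degree rotation plus a shift)
warp_init = np.array([[0.0, -1.0, 50.0],
                      [1.0,  0.0, 10.0]], dtype=np.float32)

criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
# Refine the rough guess with ECC (base_gray is the reference, test_gray the image to align)
cc, warp_matrix = cv2.findTransformECC(base_gray, test_gray, warp_init,
                                       cv2.MOTION_EUCLIDEAN, criteria)
# Apply the refined warp; WARP_INVERSE_MAP because the returned matrix maps
# template (base) coordinates to input (test) coordinates
aligned = cv2.warpAffine(test_gray, warp_matrix,
                         (base_gray.shape[1], base_gray.shape[0]),
                         flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)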
So I have to use a feature-point-based alignment method first to get a rough alignment. I tried both SIFT and ORB, and I am facing the same problem with both: it works fine for some images, while for others the resulting transformation is shifted or rotated in the wrong direction.
I thought the problem was caused by wrong matches, but if I use just the 10 keypoints with the smallest distance, all of them look like good matches to me (and I get exactly the same result when I use 100 keypoints).
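As far as I understand, findHomography with no method argument does a plain least-squares fit over all given points, so even a single bad match would skew the result. A minimal sketch of the robust RANSAC variant (the 5.0 px reprojection threshold is an arbitrary choice, and src_pts / dst_pts are placeholders for the matched-point arrays built in my code below):

import cv2
import numpy as np

# src_pts / dst_pts: Nx1x2 float32 arrays of matched coordinates (placeholders here),
# built the same way as base_keypoints / test_keypoints in the code below
h, status = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)  # 5.0 px reprojection threshold
# status flags which matches were kept as inliers
print("inliers:", int(status.sum()), "of", len(status))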
This is the result of matching:
If you compare it with the rotated image, the result is shifted to the right and upside down. What am I missing?
This is my code:
import cv2
import numpy as np

# Initiate ORB detector
orb = cv2.ORB_create()
# find the keypoints with ORB
kp_base = orb.detect(base_gray, None)
kp_test = orb.detect(test_gray, None)
# compute the descriptors with ORB
kp_base, des_base = orb.compute(base_gray, kp_base)
kp_test, des_test = orb.compute(test_gray, kp_test)
# Debug print
base_keypoints = cv2.drawKeypoints(base_gray, kp_base, color=(0, 0, 255), flags=0, outImage=base_gray)
test_keypoints = cv2.drawKeypoints(test_gray, kp_test, color=(0, 0, 255), flags=0, outImage=test_gray)
output.debug_show("Base image keypoints",base_keypoints, debug_mode=debug_mode,fxy=fxy,waitkey=True)
output.debug_show("Test image keypoints",test_keypoints, debug_mode=debug_mode,fxy=fxy,waitkey=True)
# find matches
# create BFMatcher object
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
# Match descriptors.
matches = bf.match(des_base, des_test)
# Sort them in the order of their distance.
matches = sorted(matches, key=lambda x: x.distance)
# Debug print - Draw first 10 matches.
number_of_matches = 10
matches_img = cv2.drawMatches(base_gray, kp_base, test_gray, kp_test, matches[:number_of_matches], flags=2, outImg=base_gray)
output.debug_show("Matches", matches_img, debug_mode=debug_mode,fxy=fxy,waitkey=True)
# calculate transformation matrix
base_keypoints = np.float32([kp_base[m.queryIdx].pt for m in matches[:number_of_matches]]).reshape(-1, 1, 2)
test_keypoints = np.float32([kp_test[m.trainIdx].pt for m in matches[:number_of_matches]]).reshape(-1, 1, 2)
# Calculate Homography
h, status = cv2.findHomography(base_keypoints, test_keypoints)
# Warp source image to destination based on homography
im_out = cv2.warpPerspective(test_gray, h, (base_gray.shape[1], base_gray.shape[0]))
output.debug_show("After rotation", im_out, debug_mode=debug_mode, fxy=fxy)