
I need to precisely align two images. To do that I am using the Enhanced Correlation Coefficient (ECC), which gives me great results except for images that are rotated a lot. For example, if the reference image (base image) and the tested image (the one I want to align) are rotated by 90 degrees, the ECC method doesn't work, which is consistent with the documentation of findTransformECC(), which says:

Note that if images undergo strong displacements/rotations, an initial transformation that roughly aligns the images is necessary (e.g., a simple euclidean/similarity transform that allows for the images showing the same image content approximately).
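For reference, seeding findTransformECC with a rough initial warp looks roughly like this (a sketch assuming OpenCV 4.x; the 90° Euclidean guess is only a placeholder for whatever rough alignment is available, and base_gray / test_gray are the grayscale images from the code further down):

    import cv2
    import numpy as np

    # Rough initial guess: a 90-degree rotation about the image centre (placeholder values).
    cx, cy = test_gray.shape[1] / 2.0, test_gray.shape[0] / 2.0
    warp_matrix = cv2.getRotationMatrix2D((cx, cy), 90, 1.0).astype(np.float32)

    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 500, 1e-6)
    # findTransformECC refines the seed and returns the correlation coefficient.
    cc, warp_matrix = cv2.findTransformECC(base_gray, test_gray, warp_matrix,
                                           cv2.MOTION_EUCLIDEAN, criteria, None, 5)
    aligned = cv2.warpAffine(test_gray, warp_matrix,
                             (base_gray.shape[1], base_gray.shape[0]),
                             flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)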

So I have to use a feature-point-based alignment method to do some rough alignment first. I tried both SIFT and ORB and I am facing the same problem with both: it works fine for some images, but for others the resulting transformation is shifted or rotated the wrong way.

These are the input images: Reference image / Image to be aligned

I thought that the problem was caused by wrong matches, but if I use just the 10 keypoint matches with the smallest distance, all of them look like good matches to me (and I get exactly the same result when I use 100 keypoints).

This is the result of the matching:

This is the result: Result

If you compare it with the reference image, the result is shifted to the right and upside down. What am I missing?

This is my code:

    # Initiate detector
    orb = cv2.ORB_create()

    # find the keypoints with ORB
    kp_base = orb.detect(base_gray, None)
    kp_test = orb.detect(test_gray, None)

    # compute the descriptors with ORB
    kp_base, des_base = orb.compute(base_gray, kp_base)
    kp_test, des_test = orb.compute(test_gray, kp_test)

    # Debug print
    base_keypoints = cv2.drawKeypoints(base_gray, kp_base, color=(0, 0, 255), flags=0, outImage=base_gray)
    test_keypoints = cv2.drawKeypoints(test_gray, kp_test, color=(0, 0, 255), flags=0, outImage=test_gray)

    output.debug_show("Base image keypoints",base_keypoints, debug_mode=debug_mode,fxy=fxy,waitkey=True)
    output.debug_show("Test image keypoints",test_keypoints, debug_mode=debug_mode,fxy=fxy,waitkey=True)

    # find matches
    # create BFMatcher object
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    # Match descriptors.
    matches = bf.match(des_base, des_test)
    # Sort them in the order of their distance.
    matches = sorted(matches, key=lambda x: x.distance)


    # Debug print - Draw first 10 matches.
    number_of_matches = 10
    matches_img = cv2.drawMatches(base_gray, kp_base, test_gray, kp_test, matches[:number_of_matches], flags=2, outImg=base_gray)
    output.debug_show("Matches", matches_img, debug_mode=debug_mode,fxy=fxy,waitkey=True)

    # calculate transformation matrix
    base_keypoints = np.float32([kp_base[m.queryIdx].pt for m in matches[:number_of_matches]]).reshape(-1, 1, 2)
    test_keypoints = np.float32([kp_test[m.trainIdx].pt for m in matches[:number_of_matches]]).reshape(-1, 1, 2)
    # Calculate Homography
    h, status = cv2.findHomography(base_keypoints, test_keypoints)
    # Warp source image to destination based on homography
    im_out = cv2.warpPerspective(test_gray, h, (base_gray.shape[1], base_gray.shape[0]))
    output.debug_show("After rotation", im_out, debug_mode=debug_mode, fxy=fxy)
Did you ever find an answer for this? I too have problems with the translation 'walking away' when I rotate. Normal translations of the image work just fine, but as soon as the image rotates, the translation part of the homography mat (in my case estimateRigidTransform worked better) will not stay the same (which it should when rotating around the center). – Scuba Kay
@ScubaKay Well, I cheated a little bit :D Because I know that after cropping, the circuit board will be a rectangle that is either "standing" (the shorter side of the rectangle parallel with the horizontal) or "lying" (the longer side parallel with the horizontal). So I just rotated one of the pictures four times by 90° and compared both pictures after each rotation, then simply picked the rotated image that is most similar. After that, ORB works like a champ! – hory
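A rough sketch of that rotate-and-compare workaround (an interpretation of the comment above, not hory's actual code; the normalized-correlation similarity score is an assumed choice, any rough score would do):

    import cv2
    import numpy as np

    def best_right_angle_rotation(base_gray, test_gray):
        # Try the four 90-degree rotations of test_gray and keep the one that
        # looks most like base_gray.
        best_score, best_img = -np.inf, test_gray
        for k in range(4):
            rotated = np.ascontiguousarray(np.rot90(test_gray, k))
            # Resize to the reference size so the two images can be compared directly.
            resized = cv2.resize(rotated, (base_gray.shape[1], base_gray.shape[0]))
            score = cv2.matchTemplate(base_gray, resized, cv2.TM_CCOEFF_NORMED)[0, 0]
            if score > best_score:
                best_score, best_img = score, rotated
        return best_img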

1 Answer


The answer to this problem is both mundane and irritating. Assuming this is the same issue as what I've encountered (I think it is):

Problem and Explanation

Images are saved by most cameras with EXIF tags that include an "Orientation" value. Beginning with OpenCV 3.2, this orientation tag is automatically read in when an image is loaded with cv.imread(), and the image is oriented based on the tag (there are 8 possible orientations, which include 90° rotations, mirroring and flipping). Some image viewing applications (such as Image Viewer in Linux Mint Cinnamon, and Adobe Photoshop) will display images rotated in the direction of the EXIF Orientation tag. Other applications (such as QGIS and OpenCV < 3.2) ignore the tag.

If your Image 1 has an orientation tag, and Image 2 has an orientation tag, and you perform the alignment with ORB (I haven't tried SIFT for this) in OpenCV, your aligned Image 2 will appear with the correct orientation (that of Image 1) when opened in an application that reads the EXIF Orientation tag. However, if you open both images in an application that ignores the EXIF Orientation tag, they will not appear to have the same orientation. The problem becomes even more pronounced when one image has an orientation tag and the other does not.
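As a quick illustration, you can check whether an image actually carries the tag with the same pyexiv2 ImageMetadata API used below (the file name is just a placeholder):

    import pyexiv2

    metadata = pyexiv2.ImageMetadata('image.jpg')  # placeholder path
    metadata.read()
    if 'Exif.Image.Orientation' in metadata.exif_keys:
        print('Orientation:', metadata['Exif.Image.Orientation'].value)
    else:
        print('No EXIF Orientation tag')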

One Possible Solution

Remove the EXIF Orientation tags prior to reading the images into OpenCV. As of OpenCV 3.4 (maybe 3.3?) there is an option to load the images while ignoring the tag, but when this is done they are loaded as grayscale (1 channel), which is not helpful if you NEED color: cv.imread('image.jpg', 128), where 128 means "ignore orientation". So, I use pyexiv2 in Python to remove the offending EXIF Orientation tag from my images:

    import pyexiv2

    image = path_to_image  # path to the offending image file
    imageMetadata = pyexiv2.ImageMetadata(image)
    imageMetadata.read()
    try:
        # Drop the Orientation tag so OpenCV >= 3.2 no longer auto-rotates the image on load.
        del imageMetadata['Exif.Image.Orientation']
        imageMetadata.write()
    except KeyError:
        # No Orientation tag present; nothing to remove.
        pass
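For completeness, the load-time variant mentioned above written with the named constant (keep in mind the single-channel caveat; whether newer builds keep color when the flag is combined with cv2.IMREAD_COLOR is an untested assumption here):

    import cv2

    # cv2.IMREAD_IGNORE_ORIENTATION (== 128) skips the EXIF-based auto-rotation at load time.
    img = cv2.imread('image.jpg', cv2.IMREAD_IGNORE_ORIENTATION)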