
I copied the code from the Feature Matching with FLANN tutorial on the OpenCV documentation page, and made the following changes:

  • I used SIFT features instead of SURF;
  • I modified the check for a 'good match'. Instead of

    if( matches[i].distance < 2*min_dist )
    

I used

    if( matches[i].distance <= 2*min_dist )

otherwise I would get zero good matches when comparing an image with itself.

  • I modified the parameters used when drawing the matches:

    drawMatches( img1, k1, img2, k2, good_matches, img_matches,
                 Scalar::all(-1), Scalar::all(-1),
                 vector<char>(), DrawMatchesFlags::DEFAULT );

I extracted SIFT features from all the images in the Ireland folder of the INRIA Holidays dataset. Then I compared each image to all the others and drew the matches.

However, there is a strange problem I have never experienced with any other SIFT/matcher implementation I have used in the past:

  • The matches for an image matched against itself are good: each keypoint is mapped onto itself, except for a few. See the image above (image matched against itself).
  • When I match image I against another image J (with J not equal to I), many points are mapped onto the same one. Some examples are shown in the images below.

Is there anyone who used the same code from the OpenCV tutorial and can report a different experience from mine?

Comments:

Search for "ratio test sift". – dynamic
I am aware of the ratio test, but I cannot see how it would solve this problem (the closest point is at distance = 0). – Antonio Sesto
I'm also new to OpenCV, and wonder if the reason is that different detection/description/matching algorithms suit different kinds of pictures. – Ethan Wang

1 Answer


Check out the matcher_simple.cpp example. It uses a brute-force matcher that seems to work pretty well. Here is the code:

// Headers for the OpenCV 2.4-era API used below (SurfFeatureDetector
// lives in the nonfree module).
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/nonfree/features2d.hpp>
using namespace cv;
using namespace std;

// detecting keypoints
SurfFeatureDetector detector(400);
vector<KeyPoint> keypoints1, keypoints2;
detector.detect(img1, keypoints1);
detector.detect(img2, keypoints2);

// computing descriptors
SurfDescriptorExtractor extractor;
Mat descriptors1, descriptors2;
extractor.compute(img1, keypoints1, descriptors1);
extractor.compute(img2, keypoints2, descriptors2);

// matching descriptors
BFMatcher matcher(NORM_L2);
vector<DMatch> matches;
matcher.match(descriptors1, descriptors2, matches);

// drawing the results
namedWindow("matches", 1);
Mat img_matches;
drawMatches(img1, keypoints1, img2, keypoints2, matches, img_matches);
imshow("matches", img_matches);
waitKey(0);