
I am using the following code in the descriptor_extractor_matcher.cpp sample to compute the descriptors of img1 (Mat descriptors01), write them to disk and load them back (Mat descriptors1). (The same steps are used for the keypoints; the code is much the same.)

    Ptr<DescriptorExtractor> descriptorExtractor = DescriptorExtractor::create( argv[2] );

    ...

    Mat descriptors01;
    descriptorExtractor->compute( img1, keypoints1, descriptors01 ); // compute descriptors

    FileStorage storage("test.yml", FileStorage::WRITE);             // save them to disk
    storage << "blub" << descriptors01;
    storage.release();

    Mat descriptors1;
    FileStorage storage1("test.yml", FileStorage::READ);             // load them again
    storage1["blub"] >> descriptors1;
    storage1.release();

The keypoints & descriptors for image 2 are computed and used without saving and loading.

For image 1 I use only the loaded data (keypoints & descriptors) in the matching, so for the descriptors that means descriptors1.

Now here is the thing: if I compare the cases

A) using the code above for computing, storing and loading;
B) using only the loaded data (without computing and storing it again);

I get different matching results, as you can see in the pictures for the keypoints as well as for the matched descriptors. I would have expected no differences... What am I missing here? Must I always compare two images directly, rather than comparing an image against a stored set of keypoints and its descriptors?

Of course I'm using the same values for [detectorType] [descriptorType] [matcherType] [matcherFilterType] [image1] [image2] [ransacReprojThreshold], by the way ;)

Thanks a lot!

UPDATE:

It seems the issue depends on the descriptor type. Working with loaded descriptors works for SIFT and SURF, but not for ORB and others. Images: results with the different descriptors for cases A and B:


Intuitively I see three possibilities: 1: your loading and saving doesn't work. 2: your saving and loading do work, but you lose some float precision, so your descriptors vary. 3: the matcher might use some RANSAC, which isn't deterministic, so results differ. My favorite is #2, and my advice is to save/load the keypoints and compute the descriptors again. If that works, there are again two possibilities: A: loss of precision or something like that; B: keypoints get the wrong descriptors (from another keypoint). – Micka
Hi there! It seems the setting is okay, as it works for some descriptors (at least for SURF). But if I switch to BRIEF, ORB or other descriptors there is a difference... – alti

1 Answer


Try repeating A or B individually and see if the results come out the same. I suspect they won't, and I say that because: #1, your object of interest has poor texture, which results in poor descriptors; #2, the viewpoint change between the two images is huge, which leads to non-repeatability even for the best descriptors, like SIFT.

Now comes the part of how to solve this repeatability issue: #1, use a threshold on the norm of the descriptor so that only very strong features are used for matching; #2, use the epipolar constraint along with RANSAC to filter out wrong matches. I am attaching two images to show how strongly the filter affects the correspondences.

(Image: using SURF to find correspondences between the two images, shown in a red-cyan colormap.)

(Image: after filtering the matches with RANSAC using the epipolar constraint.)

Feel free to comment and discuss further over this issue. :-)