
I'm trying to match a list of key points extracted from one image to a list of key points extracted from another image. I tried SURF/SIFT to detect the key points, but the accuracy of the detected keypoints was not what I expected. So I thought about skipping the keypoint detector and instead using the points of the connected regions directly, then computing the descriptors for those points with SIFT/SURF. But most of the time, calling the compute method empties the keypoint list.

Sample code below:

int minHessian = 100;
SurfFeatureDetector detector(minHessian);
SurfDescriptorExtractor extractor;

Mat descriptors_object;
detector.detect(img_object, keypoints_object);
extractor.compute(img_object, keypoints_object, descriptors_object);

for (int index = 0; index < listOfObjectsExtracted.size(); index++)
{
    Mat partOfImageScene = listOfObjectsExtracted[index];
    vector<Point2f> listOfContourPoints = convertPointsToPoints2f(realContoursOfRects[index]);
    vector<KeyPoint> keypoints_scene;
    KeyPoint::convert(listOfContourPoints, keypoints_scene, 100, 1000);
    //detector.detect(partOfImageScene, keypoints_scene);
    if (keypoints_scene.size() > 0)
    {
        //-- Step 2: Calculate descriptors (feature vectors)
        Mat descriptors_scene;
        extractor.compute(partOfImageScene, keypoints_scene, descriptors_scene);
        //Logic of matching between descriptors_scene and descriptors_object
    }
}

So, after calling compute in Step 2, keypoints_scene ends up empty most of the time. I know the OpenCV documentation states the following:

Note that the method can modify the keypoints vector by removing the keypoints such that a descriptor for them is not defined (usually these are the keypoints near image border). The method makes sure that the output keypoints and descriptors are consistent with each other (so that the number of keypoints is equal to the descriptors row count).

But is there any way to get better results, i.e. to have descriptors for all the points I've chosen? Am I violating the way keypoints are supposed to be used? Should I try a feature extractor other than SIFT/SURF to get what I want? Or is the same kind of problem to be expected with every feature detector implemented in OpenCV?

EDITED:

I'm using the method KeyPoint::convert to convert from points to keypoints, passing 100 as the size and 1000 as the response. Below are the details of that method:

//! converts vector of points to the vector of keypoints, where each keypoint is assigned the same size and the same orientation
static void convert(const vector<Point2f>& points2f,
                    CV_OUT vector<KeyPoint>& keypoints,
                    float size=1, float response=1, int octave=0, int class_id=-1);

As the size, 100 seems fine to me, no? If not, is there a way to find the value that best fits my case, or is it just empirical?

EDITED: The size of the image is 1920×1080; here is a sample

And most of the time they are near the border of the images. Is that a problem?

What keypoint "size" or "scale" did you give your custom keypoints? Depending on that value, the size of the descriptor window is computed. If it is too big (or maybe too small), the descriptor can't be computed. – Micka

Edited the question, @Micka – Maystro

Can you try size = 1? I'm not sure how size is exactly defined in SIFT/SURF (e.g. how the descriptor window size is computed from the keypoint size), but maybe it was values like 8, 16, etc.; I really don't remember which octave that was... – Micka

Well... some of the contours are close to the image border too... Does the number of removed keypoints increase if you increase the size of the keypoints? – Micka

Descriptors have some size. Within that size, pixel values are used for description. If the extent of the descriptor around the keypoint goes outside your image, there are no pixel values for that descriptor region, so no "good" descriptor can be computed. Computing a descriptor is like asking: "What does the MxN neighborhood of the keypoint look like?" – Micka

1 Answer


I figured it out. The problem was in the way I was computing the descriptors: as you can see in the code above, I was computing them on a small part of the image rather than on the image itself. When I used the full image instead of partOfImageScene, i.e. extractor.compute( img_scene, keypoints_scene, descriptors_scene );, it worked perfectly and I didn't lose any keypoints from my list.