Actually, I'm trying to match a list of key points extracted from one image to a list of key points extracted from another image. I tried SURF/SIFT to detect the key points, but the results were not as accurate as I expected. So I thought about skipping the key point detector altogether and just using the points of the connected regions, then computing descriptors for those points with SIFT/SURF, but most of the time calling the compute method empties the keypoint list.
Sample of code below:
//-- Step 1: Detect keypoints and compute descriptors for the object image
int minHessian = 100;
SurfFeatureDetector detector(minHessian);
SurfDescriptorExtractor extractor;
Mat descriptors_object;
detector.detect(img_object, keypoints_object);
extractor.compute(img_object, keypoints_object, descriptors_object);

for (size_t index = 0; index < listOfObjectsExtracted.size(); index++)
{
    Mat partOfImageScene = listOfObjectsExtracted[index];

    // Use the contour points of the connected region as keypoints
    // instead of running the detector on the sub-image
    vector<Point2f> listOfContourPoints = convertPointsToPoints2f(realContoursOfRects[index]);
    vector<KeyPoint> keypoints_scene;
    KeyPoint::convert(listOfContourPoints, keypoints_scene, 100, 1000);
    //detector.detect(partOfImageScene, keypoints_scene);

    if (keypoints_scene.size() > 0)
    {
        //-- Step 2: Calculate descriptors (feature vectors)
        Mat descriptors_scene;
        extractor.compute(partOfImageScene, keypoints_scene, descriptors_scene);
        // Logic of matching between descriptors_scene and descriptors_object
        // (see the sketch below)
    }
}
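For reference, the matching logic I left as a comment above is just a standard FLANN kNN match followed by Lowe's ratio test, roughly like this (the 0.75 ratio threshold is my own choice, not something prescribed):

// needs <opencv2/features2d/features2d.hpp>
FlannBasedMatcher matcher;
vector<vector<DMatch> > knnMatches;
matcher.knnMatch(descriptors_scene, descriptors_object, knnMatches, 2);

vector<DMatch> goodMatches;
for (size_t i = 0; i < knnMatches.size(); i++)
{
    // Lowe's ratio test: keep a match only if it is clearly better than the runner-up
    if (knnMatches[i].size() == 2 &&
        knnMatches[i][0].distance < 0.75f * knnMatches[i][1].distance)
        goodMatches.push_back(knnMatches[i][0]);
}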
So after calling compute in Step 2, keypoints_scene becomes empty most of the time.
I know the OpenCV documentation states the following:
Note that the method can modify the keypoints vector by removing the keypoints such that a descriptor for them is not defined (usually these are the keypoints near image border). The method makes sure that the output keypoints and descriptors are consistent with each other (so that the number of keypoints is equal to the descriptors row count).
But is there any way to get better results? I mean, to have descriptors for all the points I've chosen? Am I violating the way keypoints are supposed to be used? Should I try a feature extractor other than SIFT/SURF to get what I want? Or is the same kind of problem to be expected with every feature detector implemented in OpenCV?
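One workaround I'm considering, though I haven't verified it, is to pad each sub-image before calling compute, so that keypoints near the border still have a full support region; the pad of half the keypoint size is my own guess at what SURF needs:

// Pad the sub-image so a keypoint of diameter 100 near the border
// still has a full descriptor patch, then shift the keypoints to match
int pad = 50; // half of the size (100) I pass to KeyPoint::convert
Mat paddedScene;
copyMakeBorder(partOfImageScene, paddedScene, pad, pad, pad, pad, BORDER_REPLICATE);
for (size_t i = 0; i < keypoints_scene.size(); i++)
{
    keypoints_scene[i].pt.x += pad;
    keypoints_scene[i].pt.y += pad;
}
extractor.compute(paddedScene, keypoints_scene, descriptors_scene);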
EDITED:
I'm using the method KeyPoint::convert to convert from points to keypoints, and I'm passing 100 as the size and 1000 as the response. Below you can see the details of that method:
//! converts vector of points to the vector of keypoints, where each keypoint is assigned the same size and the same orientation
static void convert(const vector<Point2f>& points2f,
                    CV_OUT vector<KeyPoint>& keypoints,
                    float size=1, float response=1, int octave=0, int class_id=-1);
As a size, 100 seems fine to me, no? If not, is there any way to find the value that best fits my case, or is it just determined empirically?
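The only way I can think of to pick it is empirically, e.g. with a quick probe like this (my own test harness, reusing the variables from the code above):

// Try a few candidate sizes and count how many keypoints survive compute()
float candidateSizes[] = { 8.f, 16.f, 32.f, 64.f, 100.f };
for (int s = 0; s < 5; s++)
{
    vector<KeyPoint> probe;
    KeyPoint::convert(listOfContourPoints, probe, candidateSizes[s], 1000);
    Mat probeDescriptors;
    extractor.compute(partOfImageScene, probe, probeDescriptors);
    cout << "size " << candidateSizes[s] << ": " << probe.size()
         << " keypoints survive" << endl;
}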
EDITED: The size of the image is 1920×1080; here is a sample
And most of the time the points are near the border of the images. Is there any problem with this?
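To check whether that's what removes them, I thought of logging each point's distance to the border against half its size (a rough diagnostic of mine; the exact margin SURF needs may differ):

// For each converted keypoint, compare its distance to the nearest
// image edge with size/2, the radius the descriptor patch needs
for (size_t i = 0; i < keypoints_scene.size(); i++)
{
    const KeyPoint& kp = keypoints_scene[i];
    float distX = min(kp.pt.x, partOfImageScene.cols - 1 - kp.pt.x);
    float distY = min(kp.pt.y, partOfImageScene.rows - 1 - kp.pt.y);
    float distToBorder = min(distX, distY);
    if (distToBorder < kp.size / 2)
        cout << "keypoint " << i << " is only " << distToBorder
             << " px from the border (needs ~" << kp.size / 2 << ")" << endl;
}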