I'm using SURF for landmark recognition. This is the process I have in mind:
1) Save one SURF descriptor for each landmark beforehand.
2) A user takes a photo of a landmark (e.g. a building).
3) A SURF descriptor is computed for this image (the photo).
4) This descriptor is compared against each stored landmark descriptor, and the one with the lowest DMatch.distance over the 11 closest feature points is chosen as the recognized landmark.
5) I then want to calculate the rotation and scale ratio between the photo and the stored landmark image.
My understanding is that I can only get this rotation and scale ratio from the keypoints, because the feature descriptor is just a reduced, unique representation of a keypoint. That means I would have to save both the keypoints and the feature descriptors for each landmark. Is that right?
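To make clear what I mean by saving both, this is roughly how I would persist the keypoints and descriptors per landmark with cv::FileStorage (untested sketch; the file name and node names are just placeholders I made up):

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>

// Save one landmark's keypoints and descriptors to disk
cv::FileStorage fsOut("landmark_building.yml", cv::FileStorage::WRITE);
cv::write(fsOut, "keypoints", keypoints1);   // vector<cv::KeyPoint> overload from features2d
fsOut << "descriptors" << descriptor1;       // cv::Mat is written directly
fsOut.release();

// Load them back later, before matching
std::vector<cv::KeyPoint> storedKeypoints;
cv::Mat storedDescriptors;
cv::FileStorage fsIn("landmark_building.yml", cv::FileStorage::READ);
cv::read(fsIn["keypoints"], storedKeypoints);
fsIn["descriptors"] >> storedDescriptors;
fsIn.release();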
This is what I'm doing right now:
cv::SurfFeatureDetector surf(4000); // 4000 is the Hessian threshold
..
surf.detect(image1, keypoints1);
surf.detect(image2, keypoints2);
..
cv::SurfDescriptorExtractor surfDesc;
surfDesc.compute(image1, keypoints1, descriptor1);
surfDesc.compute(image2, keypoints2, descriptor2);
..
std::vector<cv::DMatch> descriptorsMatch;
cv::BruteForceMatcher<cv::L2<float> > brute;
brute.match(descriptor1, descriptor2, descriptorsMatch);
// Keep only the 11 best matches: cv::DMatch::operator< compares by distance,
// so nth_element moves the 11 smallest distances to the front.
std::nth_element(descriptorsMatch.begin(), descriptorsMatch.begin() + 10, descriptorsMatch.end());
descriptorsMatch.erase(descriptorsMatch.begin() + 11, descriptorsMatch.end());
..
for (std::vector<cv::DMatch>::const_iterator it = descriptorsMatch.begin(); it != descriptorsMatch.end(); ++it)
{
    distanceAcumulator += it->distance;
    // KeyPoint::angle is a float in degrees, so use fmod instead of the integer % operator
    angleAcumulator += std::fmod(std::fabs(keypoints1[it->queryIdx].angle - keypoints2[it->trainIdx].angle), 180.0f);
    scaleAcumulator1 += keypoints1[it->queryIdx].size;
    scaleAcumulator2 += keypoints2[it->trainIdx].size;
}
angleBetweenImages = angleAcumulator/11;
scaleBetweenImages = scaleAcumulator1/scaleAcumulator2;
similarityBetweenImages = distanceAcumulator/11;
..
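For step 5, instead of (or in addition to) averaging the per-keypoint angles and sizes, I was also wondering whether it makes sense to estimate a similarity transform directly from the coordinates of the matched keypoints, along these lines (untested sketch; estimateRigidTransform is declared in opencv2/video/tracking.hpp, and points1, points2, M are names I've made up here):

#include <opencv2/video/tracking.hpp>
#include <cmath>

// Collect the coordinates of the 11 filtered matches
std::vector<cv::Point2f> points1, points2;
for (std::vector<cv::DMatch>::const_iterator it = descriptorsMatch.begin(); it != descriptorsMatch.end(); ++it)
{
    points1.push_back(keypoints1[it->queryIdx].pt);
    points2.push_back(keypoints2[it->trainIdx].pt);
}

// 'false' restricts the fit to rotation + uniform scale + translation
cv::Mat M = cv::estimateRigidTransform(points1, points2, false);
if (!M.empty())
{
    // M = [ s*cos(a)  -s*sin(a)  tx ; s*sin(a)  s*cos(a)  ty ]
    double rotationDeg = std::atan2(M.at<double>(1,0), M.at<double>(0,0)) * 180.0 / CV_PI;
    double scaleRatio  = std::sqrt(M.at<double>(0,0) * M.at<double>(0,0)
                                 + M.at<double>(1,0) * M.at<double>(1,0));
}

Would that be a more reliable way to get the rotation and scale ratio than averaging the keypoint angles and sizes, or is the averaging approach good enough?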