
I am looking into the code in the "Features2D + Homography to find a known object" OpenCV tutorial.

I don't clearly understand what the distance variable in the matcher class is. Is it the distance between the pixels of matching keypoints in the two images?

This Q&A says it is a similarity measure (either Euclidean distance, or Hamming distance in the case of binary descriptors), calculated from the distance between the descriptor vectors.

Can somebody share how this distance is calculated, or how to match keypoints without using the existing matchers from OpenCV?

 //-- Step 3: Matching descriptor vectors using FLANN matcher
  FlannBasedMatcher matcher;
  std::vector< DMatch > matches;
  matcher.match( descriptors_object, descriptors_scene, matches );

  double max_dist = 0; double min_dist = 100;

  //-- Quick calculation of max and min distances between keypoints
  for( int i = 0; i < descriptors_object.rows; i++ )
  { double dist = matches[i].distance;  // --> what does this distance indicate?
    if( dist < min_dist ) min_dist = dist;
    if( dist > max_dist ) max_dist = dist;
  }
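Based on that description, my understanding is that a brute-force matcher for float descriptors such as SIFT/SURF would look roughly like the sketch below (my own illustration, not OpenCV's actual implementation): for each query descriptor, find the training descriptor row with the smallest Euclidean (L2) distance, and that distance is what ends up in matches[i].distance. Binary descriptors would use the Hamming norm instead.

    #include <opencv2/core/core.hpp>
    #include <opencv2/features2d/features2d.hpp>
    #include <vector>
    #include <limits>

    // Sketch: brute-force matching of float descriptors by Euclidean distance.
    std::vector<cv::DMatch> bruteForceMatch(const cv::Mat& queryDesc,
                                            const cv::Mat& trainDesc)
    {
        std::vector<cv::DMatch> matches;
        for (int q = 0; q < queryDesc.rows; q++)
        {
            int bestIdx = -1;
            float bestDist = std::numeric_limits<float>::max();
            for (int t = 0; t < trainDesc.rows; t++)
            {
                // L2 distance between two descriptor rows (128 floats for SIFT);
                // binary descriptors (ORB/BRIEF) would use cv::NORM_HAMMING instead.
                float d = (float)cv::norm(queryDesc.row(q), trainDesc.row(t), cv::NORM_L2);
                if (d < bestDist) { bestDist = d; bestIdx = t; }
            }
            // DMatch stores (queryIdx, trainIdx, distance); this bestDist is
            // the value you would read back as matches[i].distance.
            matches.push_back(cv::DMatch(q, bestIdx, bestDist));
        }
        return matches;
    }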


Thanks.

berak: it is the distance between the descriptors, not the distance between the keypoints.
nayab: @berak can you give more info on how to calculate it? I would like to do keypoint matching without using the existing matchers.

1 Answer


I came across the same problem while working on real-time object matching using the SIFT feature detector. Here is my solution, running on video.

First I created a struct to store matched keypoints. The struct contains the location of the keypoint in templateImage, the location of the matched keypoint in inputImage, and a similarity measure. Here I used the cross-correlation of the descriptor vectors as the similarity measure.

    struct MatchedPair
    {
        Point locationinTemplate;
        Point matchedLocinImage;
        float correlation;
        MatchedPair(Point loc)
        {
            locationinTemplate = loc;
        }
    };

I will sort the matched keypoints according to their similarity, so I need a helper function that tells std::sort() how to compare my MatchedPair objects.

    bool comparator(MatchedPair a, MatchedPair b)
    {
        return a.correlation > b.correlation;
    }

Now the main code starts. I used the standard method to detect and describe features in both the input image and templateImage. After computing the features I implemented my own matching function. This is the answer you are looking for:

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/nonfree/features2d.hpp> // SIFT lives in the nonfree module in OpenCV 2.4
    #include <algorithm>
    #include <vector>

    using namespace cv;
    using namespace std;

    int main()
    {
        Mat templateImage = imread("template.png", IMREAD_GRAYSCALE); // read the template image
        VideoCapture cap("input.mpeg");
        Mat frame;

        vector<KeyPoint> InputKeypts, TemplateKeypts;
        SiftFeatureDetector detector;
        SiftDescriptorExtractor extractor;
        Mat InputDescriptor, templateDescriptor, result;
        vector<MatchedPair> mpts;
        Scalar s;

        detector.detect(templateImage, TemplateKeypts); // detect the template's interest points
        extractor.compute(templateImage, TemplateKeypts, templateDescriptor);

        while (true)
        {
            mpts.clear(); // clear the matches of the previous frame
            cap >> frame; // read the next video frame
            if (frame.empty()) break; // stop at the end of the video
            cvtColor(frame, frame, CV_BGR2GRAY);
            // create the output canvas: template in the upper left, frame at the lower right
            Mat outputImage = Mat::zeros(templateImage.rows + frame.rows,
                                         templateImage.cols + frame.cols, CV_8UC1);
            detector.detect(frame, InputKeypts);
            extractor.compute(frame, InputKeypts, InputDescriptor); // detect and describe the frame's features

            /*
                So far we have computed descriptors for the template and the current
                frame using the traditional methods. From here on we implement our
                own match method.

                - Descriptor matrices have 128 columns by default (for SIFT).
                - Each row of a descriptor matrix holds the 128 features of one keypoint.

                Match methods use these descriptor matrices to calculate similarity.
                My approach is to use the cross-correlation of the keypoints'
                descriptor vectors. The code below shows how.
            */

            // Iterate over the rows of templateDescriptor (one row per keypoint in
            // the template image): i indexes template keypoints, j input keypoints.
            for (int i = 0; i < templateDescriptor.rows; i++)
            {
                mpts.push_back(MatchedPair(TemplateKeypts[i].pt));
                mpts[i].correlation = 0;
                for (int j = 0; j < InputDescriptor.rows; j++)
                {
                    // Use OpenCV's built-in function to correlate row(i) of
                    // templateDescriptor with row(j) of InputDescriptor.
                    matchTemplate(templateDescriptor.row(i), InputDescriptor.row(j),
                                  result, CV_TM_CCORR_NORMED);
                    s = sum(result); // result is 1x1 here, so the sum is the correlation of the two rows

                    // Look for the most similar row in the input image: store the
                    // correlation of the best match and its location in the input image.
                    if (s.val[0] > mpts[i].correlation)
                    {
                        mpts[i].correlation = s.val[0];
                        mpts[i].matchedLocinImage = InputKeypts[j].pt;
                    }
                }
            }

            // Show the template, the input frame and the matching lines in one output image.
            templateImage.copyTo(outputImage(Rect(0, 0, templateImage.cols, templateImage.rows)));
            frame.copyTo(outputImage(Rect(templateImage.cols, templateImage.rows, frame.cols, frame.rows)));

            // Matching part: select the 4 best matches and draw lines between them.
            // Check the correlation value again, because there can be 0-correlated pairs.
            std::sort(mpts.begin(), mpts.end(), comparator);
            for (int i = 0; i < 4 && i < (int)mpts.size(); i++)
            {
                if (mpts[i].correlation > 0.90)
                {
                    // When drawing the line, account for the offset of the locations:
                    // the template image was placed at the upper left of the output image.
                    cv::line(outputImage, mpts[i].locationinTemplate,
                             mpts[i].matchedLocinImage + Point(templateImage.cols, templateImage.rows),
                             Scalar::all(255));
                }
            }
            imshow("Output", outputImage);
            waitKey(33);
        }
    }
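A note on the design choice: SIFT descriptor values are non-negative, so CV_TM_CCORR_NORMED yields correlations in [0, 1], with 1 meaning the two descriptors point in exactly the same direction; that is why the 0.90 cutoff works as a similarity threshold. Also keep in mind that this is an exhaustive comparison of every template row against every input row, essentially what a brute-force matcher does, just with normalized cross-correlation as the similarity measure instead of the Euclidean distance that OpenCV stores in DMatch.distance.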