5 votes

I'm trying to use OpenCV to detect and extract ORB features from images.

However, the images I'm getting are not normalized (different sizes, different resolutions, etc.).

I was wondering if I need to normalize my images before extracting ORB features to be able to match them across images?

I know the feature detection is scale invariant, but I'm not sure what that means for image resolution. For example, two images of the same size, one with an object close and the other with the same object far away, should match even though the object appears at different scales. But what happens if the images themselves don't have the same size?

Should I adapt ORB's patchSize based on the image size? For example, if I use a patchSize of 20px for an 800px image, should I use a patchSize of 10px for a 400px image?

Thank you.

Update: I tested different algorithms (ORB, SURF and SIFT) with high- and low-resolution images to see how they behave. In this image, the objects are the same size, but the image resolution is different:

[Figure: ORB, SURF and SIFT keypoints detected on the same scene at high and low resolution]

We can see that SIFT is pretty stable, although it finds few features. SURF is also pretty stable in terms of keypoint count and feature scale. So my guess is that feature matching between a low-res and a high-res image would work with SIFT and SURF, but ORB produces much larger features in the low-res image, so its descriptors won't match those in the high-res image.

(The same parameters were used for high- and low-res feature extraction.)

So my guess is that it would be better to use SIFT or SURF if we want to match images with different resolutions.

Although ORB is "scale-invariant", it isn't as robust as SURF/SIFT, as mentioned in the ORB research paper. You will get different matches (in number/position/orientation) on images with different resolutions, so adapting the patch size might work. – Rick M.
Yes, I tried to use SIFT/SURF, but unfortunately they are patented, so OpenCV throws an error saying I should recompile it with OPENCV_USE_NONFREE. Since I installed it with pip, I can't recompile it. – whiteShadow
Well, you can get those too actually, see this link. – Rick M.
Yes. Unfortunately, that would require me to compile OpenCV, which I was trying to avoid. But I guess I don't really have a choice. – whiteShadow
Is there an OpenCV example code for SURF or SIFT? – Jithin

1 Answer

2 votes

According to the OpenCV documentation, ORB also uses an image pyramid to produce multi-scale features, although the details are not spelled out on that page.
If we look at the ORB paper itself, section 6.1 mentions that images at five different scales are used. But that still doesn't tell us whether you need to compute descriptors on images at different scales manually, or whether it is already implemented in OpenCV's ORB.
Finally, the source code (line 1063 at the time of writing this answer) shows that images at different resolutions are computed for keypoint/descriptor extraction. If you track the variables, you will see that the ORB class has a scale factor, which you can access with the getScaleFactor method.

In short, ORB tries to perform matching at different scales itself.