
Every implementation of SURF I have come across on the web seems to be particularly bad at extracting a useful number of interest points from small images (say, 100x100 or less).

I have tried a number of approaches:

1) Using various upscaling algorithms (from simple ones like nearest-neighbor to more advanced ones; essentially every upscaler ImageMagick provides) to increase the size of small images before analysis.
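For reference, the simplest of those upscalers can be sketched in a few lines of NumPy. This is a nearest-neighbor upscale (each source pixel becomes a factor-by-factor block), mimicking ImageMagick's point filter; a real pipeline would call ImageMagick or an image library rather than roll its own:

```python
import numpy as np

def upscale_nn(img, factor):
    """Nearest-neighbor upscaling: repeat each row and column `factor` times."""
    rows = np.repeat(np.arange(img.shape[0]), factor)
    cols = np.repeat(np.arange(img.shape[1]), factor)
    return img[np.ix_(rows, cols)]

small = np.arange(16, dtype=np.uint8).reshape(4, 4)
big = upscale_nn(small, 3)  # 12x12, each source pixel now a 3x3 block
```

Note that no upscaler adds information: smoother filters (Lanczos, Mitchell) only interpolate between the pixels that are already there, which is consistent with upscaling not producing more interest points.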

2) Other image processing tweaks to bring out features in the image such as contrast enhancement and the use of different RGB weights in the computation of the integral image.
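The integral-image tweak in 2) can be illustrated directly. Below is a minimal sketch, assuming the standard BT.601 luma weights as the baseline (the weights are the knob being experimented with); the integral image is just a 2D cumulative sum, and `box_sum` shows why SURF uses it, recovering any box filter response in four lookups:

```python
import numpy as np

# Baseline weights: ITU-R BT.601 luma. Swap in other weights to experiment.
WEIGHTS = np.array([0.299, 0.587, 0.114])

def integral_image(rgb, weights=WEIGHTS):
    """Weighted grayscale conversion, then the 2D cumulative sum SURF uses."""
    gray = rgb.astype(np.float64) @ weights          # H x W grayscale
    return gray.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of gray values over the inclusive box [r0..r1] x [c0..c1]."""
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total
```

Changing the weights rescales the grayscale values but does not create new local extrema in most images, which may explain why this had little effect on detection counts.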

3) (Re-)compression, on the assumption that compression artifacts will appear primarily around existing features, increasing their relative "surface area."

However, none of these has had any measurable effect on the number of interest points extracted from small images.

Is there anything else worth trying? Or is SURF just bad at small images, period? If so, what other algorithms are better for those?

To help you, first be clear about whether you understand how the SURF algorithm works (i.e., you have read and understood the paper that originally presented it) or whether you have only used implementations because the algorithm seemed suitable for some task. I'm almost certain it is the second case, and if so I'd recommend reading up on how SURF actually works. You will then have a better idea of how to set its parameters, and may be able to decide whether SIFT is better for your case. There may also be other approaches better suited to the (unstated) task. – mmgp
I have read a number of papers and compared the source code of several different implementations. I tweaked the parameters that seemed relevant to improving pickup on small images: for example, I reduced the pixel step size and experimented with making it relative to the size of the image. Unfortunately this was not effective. – Ben Englert
Because this is about the general SURF process rather than code or a specific framework, you might want to try asking over at dsp.stackexchange.com; there have been several questions along these lines there. It would also help if you included an example image we could use as a reference for how SURF fails to return good feature points. – Brad Larson

1 Answer


It depends on what you want to do. A 100x100 image does not contain much information, and the area SURF has available for building a meaningful descriptor is very small at that size. Depending on your goal, try the following:

  1. Use the whole 100x100 image as the descriptor area. Don't detect interest points at all; instead, place a single interest point at the centre of the image (at 50, 50) and build the descriptor from the whole image data. This can help you match similar small images against each other. Use either upright SURF or the orientation-invariant variant.
  2. Use the double-image size flag to get more interest points.
  3. Decrease the dimension of the descriptor by using a smaller surface and fewer squares. This works quite well for object recognition, but less well for 3D reconstruction.
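Item 1 can be sketched without any SURF library at all. The code below is not real SURF (no Haar wavelets, no Gaussian weighting, no orientation assignment); it is only an upright, whole-image analogue of SURF's 64-D descriptor, using image gradients as a crude stand-in for wavelet responses. With an OpenCV build that includes the nonfree `xfeatures2d` module, you would instead pass a single `cv2.KeyPoint(50, 50, size)` to the extractor's `compute()`:

```python
import numpy as np

def whole_image_descriptor(gray, grid=4):
    """Simplified SURF-style descriptor over the whole image: split the image
    into grid x grid cells and, per cell, record (sum dx, sum dy, sum |dx|,
    sum |dy|) of the gradients. Length = grid * grid * 4 (64 for grid=4)."""
    g = gray.astype(np.float64)
    dy, dx = np.gradient(g)              # crude stand-in for Haar responses
    h, w = g.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            rs = slice(i * h // grid, (i + 1) * h // grid)
            cs = slice(j * w // grid, (j + 1) * w // grid)
            feats += [dx[rs, cs].sum(), dy[rs, cs].sum(),
                      np.abs(dx[rs, cs]).sum(), np.abs(dy[rs, cs]).sum()]
    v = np.array(feats)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v         # unit-normalise, as SURF does

desc = whole_image_descriptor(np.random.rand(100, 100))
```

Two such descriptors can then be compared with Euclidean distance, which is all that is needed for matching small images against each other.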

To sum up: it all depends on what you want to achieve. Feel free to drop me a message (perhaps on the GitHub repo of SURF).