1 vote

I want to use OpenCV Python to do SIFT feature detection on remote sensing images. These images are high resolution and can be thousands of pixels wide (7000 x 6000 or bigger), and I am running into insufficient-memory errors. As a reference point, I ran the same 7000 x 6000 image through Matlab (using VLFEAT) without a memory error, although larger images could be problematic there too. Does anyone have suggestions for processing this kind of data set with OpenCV SIFT?

    OpenCV Error: Insufficient memory (Failed to allocate 672000000 bytes) in cv::OutOfMemoryError, file C:\projects\opencv-python\opencv\modules\core\src\alloc.cpp, line 55
    OpenCV Error: Assertion failed (u != 0) in cv::Mat::create, file

(I'm using Python 2.7 and OpenCV 3.4 in the Spyder IDE on 64-bit Windows with 32 GB of RAM.)

Welcome to SO. Nice question... please finish the tour, post your code, and elaborate a little on what you tried to solve it yourself (e.g. links where you found something that didn't work), and enjoy SO ;-) – ZF007
@ZF007 - not really relevant to this question. There is obviously something inside the OpenCV lib not optimized for images with 100x as many pixels as a typical video frame. Posting a bunch of wrapper code is not going to help. – Martin Beckett
Check my answer for a reply to your question, Martin. – ZF007
Hello Rebecca! Have you found any solution to this problem? – Nouman Ahsan

2 Answers

1 vote

I would split the image into smaller windows. So long as the windows overlap (I assume you have an idea of the lateral shift), a match in any window will be valid; see the sketch below.

You can even use this as a check: the translation between feature points in any part of the image must be the same for the transform to be valid.
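
A minimal sketch of this tiling approach, assuming OpenCV 3.4 with the contrib modules (where SIFT lives in cv2.xfeatures2d; in OpenCV 4.4+ use cv2.SIFT_create() instead). The tile size, overlap, and filename are illustrative values, not from the question:

    import cv2
    import numpy as np

    def sift_tiled(image, tile=2000, overlap=200):
        """Run SIFT per overlapping window, mapping keypoints back to full-image coordinates."""
        sift = cv2.xfeatures2d.SIFT_create()
        keypoints, descriptors = [], []
        h, w = image.shape[:2]
        step = tile - overlap
        for y in range(0, h, step):
            for x in range(0, w, step):
                window = image[y:y + tile, x:x + tile]
                kps, descs = sift.detectAndCompute(window, None)
                for kp in kps:
                    # shift keypoint coordinates from the window frame to the image frame
                    kp.pt = (kp.pt[0] + x, kp.pt[1] + y)
                keypoints.extend(kps)
                if descs is not None:
                    descriptors.append(descs)
        return keypoints, (np.vstack(descriptors) if descriptors else None)

    img = cv2.imread('scene.tif', cv2.IMREAD_GRAYSCALE)  # hypothetical filename
    kps, descs = sift_tiled(img)

Note that features inside the overlap strips will be detected twice, so near-duplicate keypoints should be filtered out before matching.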

0 votes

There are a few ways to process SIFT feature detection in this case:

  1. process a single image at a time on one core;
  2. process two or more images at a time on a single core;
  3. process two or more images at a time on multiple cores.

Read "cores" as either CPU or GPU cores. Note that Python threading results in serial processing rather than true parallelism (the interpreter's GIL serializes threads), so multiprocessing is the way to use multiple cores; a sketch of option 3 follows.
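
A hedged sketch of option 3, again assuming OpenCV 3.4 with contrib; the filenames and worker count are illustrative:

    import cv2
    from multiprocessing import Pool

    def detect(path):
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        sift = cv2.xfeatures2d.SIFT_create()
        kps, descs = sift.detectAndCompute(img, None)
        # cv2.KeyPoint objects don't pickle, so return plain coordinate tuples
        return path, [kp.pt for kp in kps], descs

    if __name__ == '__main__':
        paths = ['scene_a.tif', 'scene_b.tif']  # hypothetical filenames
        pool = Pool(processes=2)                # one worker per image here
        results = pool.map(detect, paths)
        pool.close()
        pool.join()

Each worker holds one full image plus its descriptors in memory, so the number of workers bounds peak memory use.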

As stated, Rebecca has at least 32 GB of internal memory on her PC at her disposal, which is more than sufficient for option 1: processing one image at a time.

In that light, splitting a single image, as Martin suggests, should be a last resort in my opinion.

Why should you avoid splitting a single image into multiple windows during feature detection (when you are not running out of memory)?

Answer:

If a corner is located at the split edge of a window, it is unwillingly cut into two more or less polygonal, straight-line-like shapes, and you won't find the corner you're looking for unless you have a specialized algorithm to search for those anomalies.

In this case:

In Rebecca's case, it's crucial to know which approach she took to processing the image(s): was one image loaded into memory at a time, or two, or many more simultaneously?

If hundreds or thousands of images are loaded into memory simultaneously, you're basically choking the system by taking away its breathing space (in the form of free memory). On top of that, other programs loaded into memory claim (reserve) or consume memory for various background tasks, which adds to the issue at hand.

Overthinking:

If, as Martin suggests, there is an issue with the OpenCV lib in handling the amount of image data Rebecca describes, do some debugging and then report your findings to OpenCV, and post a question here at SO as she did. But also post code that shows how you handle the image processing from the start, because, as explained above, that matters. And yes, as Martin stated, don't post wrappers; it's pointless to do so. A link to them (with a version number if possible) is more than enough, or a tag ;-)