3
votes

I am importing an image from a video frame, using cv2.resize() to enlarge the image by 4x, and then using Canny edge detection to help remove some noise before doing some object tracking. However, Canny edge detection kept giving me a black image.

After much testing I found that using cv2.resize() to reduce the image to 1/4 of its size before Canny edge detection gave me the result I was hoping for. Reducing the image to 1/3 also gave a much better result, but with fewer edges than the 1/4 reduction, and scaling down to 1/16 gave more edges than scaling to 1/4. Why would this be happening? Actually, while writing this question I was resizing the unscaled result, and I found that calling namedWindow with the cv.WINDOW_NORMAL flag also improved it.

I realize I can simply scale down, run Canny detection, enlarge the result of the Canny edge detection, and then do my object tracking, but this is baffling me, and knowing why it happens would be of interest to me and, I think, to others as well. Nothing I could find in the OpenCV docs suggests that the Canny algorithm depends on image size.

See images below, all generated by output = cv2.Canny(input, 30, 50):

Unscaled (improved by using cv.WINDOW_NORMAL) https://i.imgur.com/uG93Dhd.png

1/4 Reduced Before Canny Detection https://i.imgur.com/dQP9bxB.png

1/3 Reduced Before Canny Detection https://i.imgur.com/MkSpaT5.png

1/16 Reduced before Canny Detection https://i.imgur.com/SbpPkYP.png

1
Is it an image display thing? If the display scales down the binary image, it shows only a fraction of the pixels, likely not showing most of the edge pixels. Otherwise, it could be that upscaling the image 4x causes edges to be smoother, and therefore to need a lower threshold to detect. – Cris Luengo
Your results are part of the reason why it's suggested to blur the image a bit before running Canny. Downsampling is somewhat equivalent to blurring the image. But how much should you blur an image? What kernel size should you use? These are hyperparameters on top of the Canny parameters, roughly equivalent to asking how much you should down/upsample an image. Canny tries to be, but is not, a scale-invariant algorithm, as some parts of it (like the non-maximum suppression step) are definitely local operations. Anyways, why would you enlarge the image for processing? – alkasm

1 Answer

1
votes

By resizing you change the size of the features, but since you don't change the filter size, the results differ. You are effectively exploring scale-space.

Also note that the resize function doesn't prefilter the image, which causes aliasing when downsampling.