I am implementing Canny edge detection myself in Python, and I am stuck at the non-maximum suppression part. I thought I had the code right, but I get a really bad outcome with no clean lines at all. I used skimage's Canny edge detection to compare; below is the result (left: magnitude of the gradient from the Sobel filters; middle: after non-maximum suppression; right: result of skimage's Canny). I know the skimage result is after thresholding and hysteresis, but I expect the output after non-maximum suppression to be at least comparable to it.
Here is the code I used for the non-maximum suppression:
def non_max_sup8(magn, direct):
    size = magn.shape
    out = np.zeros_like(magn)
    direct = np.rad2deg(direct) + 180  # direction is now between 0 and 360
    for i in range(1, size[0] - 1):
        for j in range(1, size[1] - 1):
            if 0 <= direct[i, j] < 22.5 or 337.5 <= direct[i, j] <= 360 or 157.5 <= direct[i, j] < 202.5:
                before = magn[i, j - 1]  # compare to left and right
                after = magn[i, j + 1]
            elif 22.5 <= direct[i, j] < 67.5 or 202.5 <= direct[i, j] < 247.5:
                before = magn[i + 1, j - 1]  # compare diagonally
                after = magn[i - 1, j + 1]
            elif 67.5 <= direct[i, j] < 112.5 or 247.5 <= direct[i, j] < 292.5:
                before = magn[i + 1, j]  # compare above and below
                after = magn[i - 1, j]
            else:
                before = magn[i - 1, j - 1]  # compare diagonally
                after = magn[i + 1, j + 1]
            if magn[i, j] >= before and magn[i, j] >= after:
                out[i, j] = magn[i, j]
    return out
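One way to localise the problem is to run the suppression on a synthetic ridge whose gradient direction is known in advance, so any surprise must come from the angle handling rather than from the gradient computation. The sketch below copies the same logic as the function above and feeds it a vertical ridge with the gradient pointing along +x (the test image and names are mine, not from the original post):

```python
import numpy as np

def non_max_sup8(magn, direct):
    """Same neighbour-selection logic as in the question."""
    out = np.zeros_like(magn)
    deg = np.rad2deg(direct) + 180  # direction now in [0, 360]
    for i in range(1, magn.shape[0] - 1):
        for j in range(1, magn.shape[1] - 1):
            d = deg[i, j]
            if d < 22.5 or d >= 337.5 or 157.5 <= d < 202.5:
                before, after = magn[i, j - 1], magn[i, j + 1]   # left/right
            elif 22.5 <= d < 67.5 or 202.5 <= d < 247.5:
                before, after = magn[i + 1, j - 1], magn[i - 1, j + 1]
            elif 67.5 <= d < 112.5 or 247.5 <= d < 292.5:
                before, after = magn[i + 1, j], magn[i - 1, j]   # up/down
            else:
                before, after = magn[i - 1, j - 1], magn[i + 1, j + 1]
            if magn[i, j] >= before and magn[i, j] >= after:
                out[i, j] = magn[i, j]
    return out

# synthetic vertical ridge: bright column 3 flanked by dimmer columns 2 and 4
magn = np.zeros((5, 7))
magn[:, 3] = 1.0
magn[:, 2] = magn[:, 4] = 0.5
direct = np.zeros_like(magn)   # gradient points along +x everywhere
out = non_max_sup8(magn, direct)
```

With a correct angle convention only the ridge column should survive; if the flanking columns survive instead, the angles are a quarter turn off.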
The gradients are calculated as follows and should be correct. (The direction is in radians between -pi and pi and is converted to 0 to 360 degrees in the code above.)
def edges(img, filterv=vSobel, filterh=hSobel):
    height, width = img.shape
    magn = np.zeros_like(img)
    direc = np.zeros_like(img)
    X = np.zeros_like(img)
    Y = np.zeros_like(img)
    for y in range(3, height - 2):
        for x in range(3, width - 2):
            box = img[x - 1:x + 2, y - 1:y + 2]
            transformv = filterv * box
            Gy = transformv.sum() / 4
            transformh = filterh * box
            Gx = transformh.sum() / 4
            X[x, y] = Gx
            Y[x, y] = Gy
            magn[x, y] = np.sqrt(Gx ** 2 + Gy ** 2)
            direc[x, y] = np.arctan2(Gy, Gx)
    return X, Y, magn, direc
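To rule out errors in the hand-rolled convolution loop, the same gradients can be cross-checked with a vectorized version built from shifted slices. This is a sketch under my own assumptions: the standard Sobel kernels are written out via the slice weights (1, 2, 1), the /4 normalisation from the loop above is kept, and edge padding is used so the output has the same shape as the input:

```python
import numpy as np

def sobel_gradients(img):
    """Sobel gradients via shifted slices; assumes the standard 3x3 kernels."""
    img = img.astype(float)
    p = np.pad(img, 1, mode='edge')
    # horizontal derivative Gx: column differences weighted 1, 2, 1 over rows
    gx = ((p[1:-1, 2:] - p[1:-1, :-2]) * 2
          + (p[:-2, 2:] - p[:-2, :-2])
          + (p[2:, 2:] - p[2:, :-2])) / 4
    # vertical derivative Gy: row differences weighted 1, 2, 1 over columns
    gy = ((p[2:, 1:-1] - p[:-2, 1:-1]) * 2
          + (p[2:, 2:] - p[:-2, 2:])
          + (p[2:, :-2] - p[:-2, :-2])) / 4
    magn = np.hypot(gx, gy)
    direc = np.arctan2(gy, gx)
    return gx, gy, magn, direc

# sanity check on a horizontal ramp: Gx constant, Gy zero, direction 0
ramp = np.tile(np.arange(5.0), (5, 1))
gx, gy, magn, direc = sobel_gradients(ramp)
```

If this disagrees with the loop version on the same input, the discrepancy points at either the kernel orientation (`filterv` vs `filterh`) or the x/y indexing in the loop.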
So, my question is: what am I doing wrong? I am thinking the following, but I would like to hear your thoughts. Suppose the direction is 90 degrees (i.e. pointing upwards); then I thought you would actually want to compare with the pixels to the left and right instead of below and above (as I did up till now), since that gives you a single-pixel-wide edge. However, I based my code on examples I found on the internet, and they all seem to do something like the above: comparing with the pixels in the gradient direction instead of in the normal direction. What do you think about this? Am I misunderstanding something, or is my idea not so bad?
EDIT: trying the above (comparing the pixel with the two pixels in the normal direction, instead of the gradient direction) gives a result similar to skimage's Canny edge detection.
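Swapping the neighbour cases as described in the edit is equivalent to rotating every stored angle by 90 degrees before the lookup. If the direction image really is a quarter turn off, the correction can also be applied up front, once, instead of changing the four index cases; a minimal sketch (the wrap-around convention back into [-pi, pi) is my assumption):

```python
import numpy as np

def rotate_angles(direct):
    """Rotate gradient angles by 90 degrees and wrap back into [-pi, pi)."""
    return np.mod(direct + np.pi / 2 + np.pi, 2 * np.pi) - np.pi

angles = np.array([0.0, np.pi / 2, -np.pi / 2])
rotated = rotate_angles(angles)   # 0 -> pi/2, pi/2 -> -pi, -pi/2 -> 0
```

The rotated angles can then be passed to the unmodified non_max_sup8.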
Regards.

Comments:

"Gx = transformv looks wrong. The 'v' is for vertical, no? And the 'x' is horizontal, no? Your result is consistent with a 90 degree rotation of the directions. I recommend that you plot and examine the direction image and compare that with your expectation according to the edges in the image." – Cris Luengo

"filterv was actually the filter used to compute Gx, so that it cancels the mistake out haha! I will change it in my question :)" – Katie

"[…] direct, to make sure they match your expectations. You might have to use -direct or direct+90 or 90-direct or some such transformation to make the angles match the direction across the edges." – Cris Luengo
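Following the plotting suggestion in the comments, a quick way to validate the direction image is a synthetic image with a single known edge; plain finite differences (np.gradient) stand in for the Sobel filters here, so only the angle convention is being tested:

```python
import numpy as np

# synthetic test image: dark left half, bright right half -> one vertical edge;
# the gradient should point along +x, so arctan2(Gy, Gx) should be ~0 there
img = np.zeros((8, 8))
img[:, 4:] = 1.0
gy, gx = np.gradient(img)        # np.gradient returns (d/d_row, d/d_col)
direc = np.arctan2(gy, gx)
# inspect the angles at the edge columns, or visualise with plt.imshow(direc)
```

If the same check on the edges() output gives ~90 degrees instead of 0 at a vertical edge, the directions are rotated, which matches the symptom discussed above.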