3 votes

I am working on contour detection for the image below; however, due to the lighting conditions, the detection is incomplete where the image shows glare. I am trying to remove the glare in order to get better contour detection.

Here is the original image

image with glare

And here is the grayed + thresholded image on which cv2.connectedComponentsWithStats is run to detect the objects. I have boxed the areas where I need to reduce the exposure (since I am using THRESH_BINARY_INV, i.e. inverse thresholding, those areas appear black).

grayed + thresholded image

As you can see below, the detected object areas are incomplete: cv2.connectedComponentsWithStats does not detect the complete area of the object.

Object areas

Here is the cropped, outlined component on which the contour is then calculated:

Cropped outlined

So of course the resulting contour is wrong as well:

Wrong contour due to glare

Here is what I have done so far:

import cv2
import numpy as np

def getFilteredContours(image, minAreaFilter=20000) -> np.array:
    ret = []
    ctrs,_ = cv2.findContours(image, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    ctrs = sorted(ctrs, key=cv2.contourArea, reverse=True)
    for i, c in enumerate(ctrs):
        # Calculate the area of each contour
        area = cv2.contourArea(c)
        if area < minAreaFilter:
            break
        ret.append(c)
    return ret

birdEye = cv2.imread(impath)

gray = cv2.cvtColor(birdEye, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)
threshImg = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY_INV)[1]
(numLabels, labels, stats, centroids) = cv2.connectedComponentsWithStats(
    threshImg, 4, cv2.CV_32S)

# Then, for each identified component, extract the component mask and compute its contour

filteredIdx = getFilteredLabelIndex(stats)  # see sketch below

for labelId in filteredIdx:
    componentMask = (labels == labelId).astype("uint8") * 255
    ctrs, _ = cv2.findContours(componentMask, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    ctr = max(ctrs, key=cv2.contourArea)
    cv2.drawContours(birdEye, [ctr], -1, (255, 0, 255), 3)

cv2.imshow("original contour", birdEye)

cv2.waitKey(0)
cv2.destroyAllWindows()
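
For reference, getFilteredLabelIndex is not shown here; a minimal sketch of such a helper, assuming it simply keeps the labels whose area in the stats array exceeds the same 20000-pixel minimum used in getFilteredContours, could be:

def getFilteredLabelIndex(stats, minAreaFilter=20000):
    # Keep labels whose pixel area exceeds the minimum, skipping label 0 (the background)
    return [i for i in range(1, stats.shape[0]) if stats[i, cv2.CC_STAT_AREA] >= minAreaFilter]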

Any suggestions would be welcome.

Thanks

Pat

Contour detection functions take a binary image as input. Do you know how you can obtain a binary image? - Burak
Yes, indeed I have updated my question - user2097439
I suggest you look into diffuse lighting so that your images have reduced glare. - fmw42

2 Answers

1 vote

You may use floodFill for filling the background first.

cv2.floodFill gives a good result on your sample image.
The result is good because the background is relatively homogeneous.
floodFill uses color information, as opposed to other algorithms that use only brightness.
The background has a slight brightness gradient that the "flood fill" algorithm handles well.

You may use the following stages:

  • Replace all (dark) values (below 10 for example) with 10 - avoiding issues where there are black pixels inside an object.
  • Use cv2.floodFill for filling the background with black color.
    Use the top left corner as a "background" seed color (assume pixel [10,10] is not in an object).
  • Convert to Grayscale.
  • Apply threshold - convert all pixels above zero to 255.
  • Use opening (morphological operation) for removing small outliers.
  • Find contours.

Code sample:

import cv2
import numpy as np

birdEye = cv2.imread(r"C:\Rotem\tools.jpg")

# Replace all (dark) values below 10 with 10 - avoiding issues where there are black pixels inside an object
birdEye = np.maximum(birdEye, 10)

foreground = birdEye.copy()

seed = (10, 10)  # Use the top left corner as a "background" seed color (assume pixel [10,10] is not in an object).

# Use floodFill for filling the background with black color
cv2.floodFill(foreground, None, seedPoint=seed, newVal=(0, 0, 0), loDiff=(5, 5, 5, 5), upDiff=(5, 5, 5, 5))

# Convert to Grayscale
gray = cv2.cvtColor(foreground, cv2.COLOR_BGR2GRAY)

# Apply threshold
thresh = cv2.threshold(gray, 1, 255, cv2.THRESH_BINARY)[1]

# Use opening for removing small outliers
thresh = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5)))

# Find contours
cntrs, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

# Draw contours
cv2.drawContours(birdEye, cntrs, -1, (255, 0, 255), 3)

# Show images for testing
cv2.imshow('foreground', foreground)
cv2.imshow('gray', gray)
cv2.imshow('thresh', thresh)
cv2.imshow('birdEye', birdEye)

cv2.waitKey()
cv2.destroyAllWindows()

foreground:
foreground

birdEye output:
birdEye output

0 votes

My suggestion is to use the dilation and erosion functions (or the closing operation) in cv2.

If you use cv2.dilate, the white area becomes bigger.

Conversely, if you use cv2.erode, the white area becomes smaller.

Iterating these operations removes the noise (the black holes) inside the white area.

The closing operation is a dilation followed by an erosion.

See https://docs.opencv.org/master/d9/d61/tutorial_py_morphological_ops.html
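
For example, a minimal sketch applying closing to the thresholded image from the question (threshImg); the 15x15 kernel size is an assumption and should be tuned to the size of the glare spots:

import cv2

# threshImg is the binary THRESH_BINARY_INV image from the question
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 15))  # kernel size is a guess; tune it

# Closing = dilation followed by erosion; it fills the black holes left by the glare
closed = cv2.morphologyEx(threshImg, cv2.MORPH_CLOSE, kernel)

# Equivalently, the two steps written explicitly:
dilated = cv2.dilate(threshImg, kernel)
closedExplicit = cv2.erode(dilated, kernel)

cv2.morphologyEx with cv2.MORPH_CLOSE applies the dilation and the erosion with the same kernel in a single call.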