1 vote

I have this raw image in grayscale:

(image: raw grayscale input)

I would like to detect the edge of the object. However, it is affected by illumination near the edge. This is what I obtained after Gaussian blur and Canny edge detection:

(image: result after Gaussian blur and Canny edge detection)

This is my code:

    cv::cvtColor(imgOriginal, imgGrayscale, cv::COLOR_BGR2GRAY);    // convert to grayscale

    cv::GaussianBlur(imgGrayscale,          // input image (was `crop`; the grayscale image is what gets blurred)
        imgBlurred,                         // output image
        cv::Size(5, 5),                     // smoothing window width and height in pixels
        5);                                 // sigma value, determines how much the image will be blurred

    cv::Canny(imgBlurred,           // input image
        imgCanny,                   // output image
        0,                          // low threshold
        100);                       // high threshold

The light source is beneath the object. The illumination at the edge of the object comes from the light source or its reflection, and it always appears in the same place.

The illumination is detected as an edge as well. I have tried several other approaches, such as connected-component labelling and binarizing the image with sample code (I'm a beginner here), but to no avail. Is there any way to detect a clean edge despite the illumination?

You need to do some pre-processing to mask that out. Can you tell us more about that illumination? Is it always in the same place? Can you get some baseline images with no object? – Dan Mašek
Hi @DanMašek, I tried to binarize the image and to use Otsu's method, but both failed. The light source is beneath the object. The illumination at the edge of the object comes from the light source or its reflection, and it is always in the same place. As the object moves, it covers the light source and the illumination appears at its edge. – Samuel
Try adaptive thresholding or CLAHE before Canny edge detection. – Rick M.
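The adaptive-thresholding idea from the comment above can be sketched without OpenCV. The function below is a hypothetical, minimal stand-in for `cv::adaptiveThreshold` in its mean mode: each pixel is compared to the mean of its local window minus a constant `C`, so a smooth illumination gradient no longer defeats a single global threshold. The image layout (a flat `std::vector<int>` of gray values) and the parameter names are assumptions for illustration.

```cpp
#include <vector>

// Sketch of adaptive (local-mean) thresholding, the idea behind
// cv::adaptiveThreshold with ADAPTIVE_THRESH_MEAN_C: a pixel is set to
// 255 if it is brighter than the mean of its (2*half+1)^2 neighbourhood
// minus C, and to 0 otherwise. Windows are clipped at the image border.
std::vector<int> adaptiveMeanThreshold(const std::vector<int>& img,
                                       int w, int h, int half, int C) {
    std::vector<int> out(img.size(), 0);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            long sum = 0;
            int n = 0;
            for (int dy = -half; dy <= half; ++dy)
                for (int dx = -half; dx <= half; ++dx) {
                    int yy = y + dy, xx = x + dx;
                    if (yy >= 0 && yy < h && xx >= 0 && xx < w) {
                        sum += img[yy * w + xx];
                        ++n;
                    }
                }
            // local decision: compare against the window mean minus C
            out[y * w + x] = (img[y * w + x] > sum / n - C) ? 255 : 0;
        }
    return out;
}
```

In OpenCV itself the equivalent would be `cv::adaptiveThreshold` (or `cv::createCLAHE` for contrast equalization) applied before `cv::Canny`.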

2 Answers

1 vote

The background light patches may be removable with erosion using a fairly large kernel, since the object is much bigger than the light patches.
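To make the erosion idea concrete, here is a minimal pure-C++ sketch of binary erosion with a 3x3 kernel (a stand-in for `cv::erode`, which is what you would actually use): a pixel stays foreground only if its whole neighbourhood is foreground, so isolated bright patches vanish while a large object merely shrinks by one pixel per pass. The grid size and 0/1 image representation are assumptions for illustration.

```cpp
#include <array>

// Binary erosion with a 3x3 structuring element: out[y][x] is 1 only if
// all nine pixels of in[] under the kernel are 1. Border pixels are left
// as background, which matches eroding with a background border.
constexpr int H = 7, W = 9;
using Img = std::array<std::array<int, W>, H>;

Img erode3x3(const Img& in) {
    Img out{};  // value-initialized: border stays 0
    for (int y = 1; y < H - 1; ++y)
        for (int x = 1; x < W - 1; ++x) {
            int keep = 1;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    keep &= in[y + dy][x + dx];
            out[y][x] = keep;
        }
    return out;
}
```

On a frame containing a large object blob plus an isolated bright pixel, one pass removes the pixel but keeps the blob's interior. With OpenCV you would typically follow the erosion with a dilation (an opening, `cv::morphologyEx` with `MORPH_OPEN`) to restore the object to its original size.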

Another common technique you can try is the distance transform combined with watershed. The distance transform will likely return points that you can be certain are inside the object (since the object has few dark areas). Watershed then tries to find the regions connected to those confirmed points by comparing gradients. You may need to merge multiple regions after the watershed if the distance transform yields several points inside the object.
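The distance-transform step can be sketched without OpenCV as a two-pass chamfer transform (a simplified stand-in for `cv::distanceTransform`, here with city-block distance): each foreground pixel receives its distance to the nearest background pixel, and the maxima are points deep inside the object, which serve as seed markers for `cv::watershed`. The flat-vector image layout is an assumption for illustration.

```cpp
#include <algorithm>
#include <vector>

// Two-pass chamfer distance transform with city-block (L1) distance.
// fg[i] != 0 marks foreground; the result d[i] is the L1 distance from
// pixel i to the nearest background pixel (0 on background itself).
std::vector<int> chamferDT(const std::vector<int>& fg, int w, int h) {
    const int INF = 1 << 20;
    std::vector<int> d(fg.size());
    for (std::size_t i = 0; i < fg.size(); ++i)
        d[i] = fg[i] ? INF : 0;
    // forward pass: propagate distances from the top-left
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            int& v = d[y * w + x];
            if (x > 0) v = std::min(v, d[y * w + x - 1] + 1);
            if (y > 0) v = std::min(v, d[(y - 1) * w + x] + 1);
        }
    // backward pass: propagate distances from the bottom-right
    for (int y = h - 1; y >= 0; --y)
        for (int x = w - 1; x >= 0; --x) {
            int& v = d[y * w + x];
            if (x + 1 < w) v = std::min(v, d[y * w + x + 1] + 1);
            if (y + 1 < h) v = std::min(v, d[(y + 1) * w + x] + 1);
        }
    return d;
}
```

Thresholding this map at a high value keeps only the "certainly inside" pixels; in OpenCV those become the marker image passed to `cv::watershed`.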

0 votes

It is impossible to get rid of this problem completely. What an edge detector detects are intensity variations, which result from edges in the objects but also from the lighting. Given the lighting you have there, the variations caused by it are quite prominent.

I would suggest two approaches to solve this problem:

  1. Adjust the lighting, if you are able to. Getting the lighting right solves 50% of any computer vision problem.

  2. Use any knowledge you have about the image, background or lighting to remove unnecessary edges. If the camera is stationary, background subtraction can remove edges coming from the background. If you know the shape, color, etc. of the object, you can discard edges that are not a good fit for it. If it is too hard to determine the exact properties of the object, you can also train a machine-learning system on many photos to segment the image.
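Since the light patches are always in the same place, the background-subtraction idea in point 2 fits this case well: capture one frame of the empty scene and diff each new frame against it, so static structures (including the fixed illumination) cancel before edge detection. Here is a minimal sketch, assuming flat vectors of gray values; the function name and threshold are made up for illustration.

```cpp
#include <cstdlib>
#include <vector>

// Simple background subtraction: mark a pixel as foreground (255) when
// its gray value differs from the stored background frame by more than
// `thresh`; static content, such as fixed light patches, maps to 0.
std::vector<int> subtractBackground(const std::vector<int>& frame,
                                    const std::vector<int>& background,
                                    int thresh) {
    std::vector<int> mask(frame.size(), 0);
    for (std::size_t i = 0; i < frame.size(); ++i)
        mask[i] = (std::abs(frame[i] - background[i]) > thresh) ? 255 : 0;
    return mask;
}
```

The equivalent in OpenCV is `cv::absdiff` followed by `cv::threshold` (or one of the `cv::BackgroundSubtractor` classes when the background drifts over time); Canny would then run only inside the resulting foreground mask.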