3 votes

On a recent set of images, my OpenCV code stopped finding the correct area of a contour. This appears to happen when the contour is not closed. I have tried to ensure the contour is closed to no avail.

Edit: The problem is that there are gaps in the contour.

Background: I have a series of images of a capsule in a channel and I want to measure the area of the shape as well as the centroid from the moments.

Problem: When the contour is not closed, the moments are wrong.

Edit: When I have gaps, the contour is not of the whole shape and hence the incorrect area.

What I do:

  • Read image -> img =cv2.imread(fileName,0)
  • apply Canny filter -> edges = cv2.Canny(img,lowerThreshold,lowerThreshold*2)
  • find contours -> contours, hierarchy = cv2.findContours(edges,cv2.cv.CV_RETR_LIST,cv2.cv.CV_CHAIN_APPROX_NONE)
  • find longest contour
  • ensure contour is closed
  • find moments -> cv2.moments(cnt)

A working example with test images can be found here.

There is a question regarding closing a contour but neither of the suggestions worked. Using cv2.approxPolyDP does not change the results, although it should return a closed contour. Adding the first point of the contour as the last, in order to make it closed, also does not resolve the issue.

An example of an image with the contour drawn on it is below. Here, the area is determined as 85, while in an almost identical image it is 8660, which is what it should be. http://www.negative-probability.co.uk/docs/ImageWContour_0.png

Any advice would be appreciated.

Code:

img =cv2.imread(fileName,0)
edges = cv2.Canny(img,lowerThreshold,lowerThreshold*2)
contours, hierarchy = cv2.findContours(edges,cv2.cv.CV_RETR_LIST,cv2.cv.CV_CHAIN_APPROX_NONE) #cv2.cv.CV_CHAIN_APPROX_NONE or cv2.cv.CV_CHAIN_APPROX_SIMPLE

#Select longest contour as this should be the capsule
lengthC=0
ID=-1
idCounter=-1
for x in contours:
    idCounter=idCounter+1 
    if len(x) > lengthC:
        lengthC=len(x)
        ID=idCounter

if ID != -1:
    cnt = contours[ID]
    cntFull=cnt.copy()

    #approximate the contour, where epsilon is the distance to 
    #the original contour
    cnt = cv2.approxPolyDP(cnt, epsilon=1, closed=True)

    #add the first point as the last point, to ensure it is closed
    lenCnt=len(cnt)
    cnt= np.append(cnt, [[cnt[0][0][0], cnt[0][0][1]]]) 
    cnt=np.reshape(cnt, (lenCnt+1,1, 2))

    lenCntFull=len(cntFull)
    cntFull= np.append(cntFull, [[cntFull[0][0][0], cntFull[0][0][1]]]) 
    cntFull=np.reshape(cntFull, (lenCntFull+1,1, 2))

    #find the moments
    M = cv2.moments(cnt)
    MFull = cv2.moments(cntFull)
    print('Area = %.2f \t Area of full contour= %.2f' %(M['m00'], MFull['m00']))
Good: you searched for previous questions and found something related, and mentioned this in your question. Bad: you simply say that the suggestions did not work. Why did they not work? What have you tried? Right now, the answer I would give you is exactly the same as for the previous question: make sure that your contour is closed around the whole object, for example by dilation or convex hull. If the border has gaps in it, the area will always be wrong. Also, please include an unzipped and processed image (i.e. with your contour drawn in it) to reach the most potential answerers. – HugoRune
To address some misunderstandings: findContours will always return a closed contour. approxPolyDP or adding the first point at the end will not change this. Your problem is not that the contour is not closed; your problem is that the contour closes over the wrong area, i.e. if you pass a Canny edge image to findContours that contains gaps, the found contour will be closed, but the area it contains will be only the edges themselves, not the interior. For starters, I would avoid Canny and use a simple thresholding before findContours. – HugoRune
Upon rereading the previous question, I think the question is somewhat misleading. As I said, I am pretty sure findContours returns a closed contour. If you zoom in on your image, I think you will find that in the wrong cases, the contour runs twice along the border of the object, once on the outside and once on the inside, so that it contains the whole border of your object, but not its interior. A convex hull over the object would solve this, if the Canny edge image contains only a single gap. Dilation of the Canny edge image will close any number of small gaps. – HugoRune
@NegativeProbability can you draw the single points as single pixels instead of small circles? Hard to see whether there are gaps. If you use the "drawContours" (filled) function instead, you'll see how OpenCV interprets the contours, so you might get an impression why your area computation fails. – Micka
@Micka Here a link. HugoRune was correct, the problem is that there are gaps in the contour. I will amend the question to make this clear. – Edgar H

3 Answers

2 votes

My problem was, as @HugoRune pointed out, that there are gaps in the contour. The solution is to close the gaps.

I found it difficult to find a general method to close the gaps, so I iteratively change the threshold of the Canny filter and perform morphological closing until a closed contour is found.

For those struggling with the same problem, there are several good answers on how to close contours, such as this or this.

1 vote

Having dealt with a similar problem, an alternative solution (and arguably simpler, with less overhead) is to use the morphological opening functionality, which performs an erosion followed by a dilation. If you turn this into a binary image first, perform the opening operation, and then do the Canny detection, that should do the same thing, but without having to iterate with the filter. The only thing you will have to do is play with the kernel size a couple of times to identify an appropriate size without losing too much detail. I have found this to be a fairly robust way of making sure the contours are closed.

Morphological operations documentation

0 votes

An alternate approach is to use the contour points to find the area. Here nContours has previously been found through cvFindContours(). I have used MFC CArray here; you can use std::vector alternatively.

////////////////////////////////////////////

CvSeq* MasterContour = NULL;
if (nContours > 1)
{
    // Find the contour with the largest bounding box
    CvRect rectMax = cvRect(0, 0, 0, 0);
    for (int i = 0; i < nContours; i++)
    {
        CvRect rect = cvBoundingRect(m_contour, 1);
        if (rect.width > rectMax.width)
        {
            rectMax = rect;
            MasterContour = m_contour;
        }
        if (m_contour->h_next != 0)
            m_contour = m_contour->h_next;
        else
            break;
    }
}
else
    MasterContour = m_contour;

arOuterContourPoints.RemoveAll();
for (int i = 0; i < MasterContour->total; i++)
{
    // Copy each contour point into the point array
    CvPoint *pPt = (CvPoint *)cvGetSeqElem(MasterContour, i);
    arOuterContourPoints.Add(CPoint(pPt->x, pPt->y));
}
// Shoelace formula over the contour points
int nOuterArea = 0;
for (int i = 0; i < arOuterContourPoints.GetSize(); i++)
{
    if (i == (arOuterContourPoints.GetSize() - 1))
        nOuterArea += (arOuterContourPoints[i].x * arOuterContourPoints[0].y - arOuterContourPoints[0].x * arOuterContourPoints[i].y);
    else
        nOuterArea += (arOuterContourPoints[i].x * arOuterContourPoints[i+1].y - arOuterContourPoints[i+1].x * arOuterContourPoints[i].y);
}
nOuterAreaPix = abs(nOuterArea / 2.0);

/////////////////////////////////////////////////////////////
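The loop above is the shoelace formula. For reference, the same computation in Python (the question's language) is a few lines; this is a generic sketch, not the answerer's code, and in practice cv2.contourArea does the same thing on a contour array.

```python
def shoelace_area(points):
    """Polygon area from an ordered list of (x, y) vertices
    via the shoelace formula."""
    n = len(points)
    s = 0
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]  # wrap around to close the polygon
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0
```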