
For exactly the same image, OpenCV and MATLAB give me different contour areas.

OpenCV code:

Mat img = imread("testImg.png", 0);  // 0 = load as grayscale
Mat img_bw;
threshold(img, img_bw, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
Mat tmp;
img_bw.copyTo(tmp);  // findContours modifies its input, so work on a copy
findContours(tmp, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);

// Get the moments
vector<Moments> mu(contours.size());
for (int i = 0; i < contours.size(); i++)
{
    mu[i] = moments(contours[i], false);
}

// Display the area (m00)
for (int i = 0; i < contours.size(); i++)
{
    cout << mu[i].m00 << endl;
    // I also tried
    // cout << contourArea(contours.at(i)) << endl;
    // but the result is the same
}

MATLAB code:

Img = imread('testImg.png');
lvl = graythresh(Img);
bw = im2bw(Img,lvl);
stats = regionprops(bw,'Area');
for k = 1:length(stats)
    Area = stats(k).Area; %m00
end

Does anyone have any thoughts on this? How can I make them agree? I suspect they use different methods to find contours.

I uploaded the test image here so that anyone interested can reproduce the procedure: flickr.com/photos/129846799@N07/16072821870

It is a 100 by 100, 8-bit grayscale image with only 0 and 255 pixel intensities. For simplicity, it contains only one blob. For OpenCV, the area of the contour (image moment m00) is 609.5 (a very odd value); for MATLAB, it is 763.

Thanks

Can you provide us a link to the image so we can reproduce the results on our end? You don't have enough reputation to upload an image, so link to a public sharing website and I will modify your post. You're probably right about how OpenCV and MATLAB find contours. BTW, did you take note of how many contours were produced by each? Are they the same number? – rayryeng
Hi rayryeng! Thanks so much for your fast reply. I uploaded the image at flickr.com/photos/129846799@N07/16072821870. It is a 100 by 100, 8-bit grayscale image with only 0 and 255 pixel intensities. For OpenCV, the area of the contour (image moment m00) is 609.5 (a very odd value); for MATLAB, the area is 763. – SimaGuanxing

1 Answer


Many different definitions exist for how contours should be extracted from a binary image. For example, a contour could be the polygon that forms the perimeter of a white object in the binary image. If OpenCV used that definition, the areas of its contours would equal the areas of the connected components found by MATLAB. But that is not the case. The contour found by the findContours() function is the polygon that connects the centers of neighboring "edge pixels", where an edge pixel is a white pixel that has a black neighbor in its N4 neighborhood.

Example: suppose you have an image of size 100x100 pixels, where every pixel above the diagonal is black and every pixel on or below the diagonal is white (a black triangle and a white triangle). The exact separation polygon would have almost 200 vertices spaced 1 pixel apart: (0,0), (1,0), (1,1), (2,1), (2,2), ..., (100,99), (100,100), (0,100). As you can see, this definition is not very good from a practical point of view. The polygon returned by OpenCV has exactly the 3 vertices needed to define the triangle: (0,0), (99,99), (0,99). Its area is 99 x 99 / 2 = 4900.5 pixels. It is not equal to the number of white pixels, and it is not even an integer. But this polygon is more practical than the previous one.
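You can verify this yourself. Here is a minimal sketch of the triangle example (my own construction, using the same OpenCV 2.x API as in your question) that compares the white pixel count with the polygon area returned by findContours():

#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;

int main()
{
    // Build the 100x100 test image: white on and below the diagonal
    Mat img = Mat::zeros(100, 100, CV_8UC1);
    for (int y = 0; y < img.rows; y++)
        for (int x = 0; x <= y; x++)
            img.at<uchar>(y, x) = 255;

    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;
    Mat tmp = img.clone();  // findContours modifies its input
    findContours(tmp, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);

    cout << "white pixels: " << countNonZero(img) << endl;         // 5050
    cout << "contour area: " << contourArea(contours[0]) << endl;  // 4900.5
    return 0;
}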

These are not the only possible definitions for polygon extraction; many others exist, and some of them (in my opinion) may be better than the one used by OpenCV. But this is the one that was implemented, and it is used by a lot of people.

Currently there is no effective workaround for your problem. If you want to get exactly the same numbers from MATLAB and OpenCV, you will have to draw the contours found by findContours() on a black image and call moments() on that image. I know that the upcoming OpenCV 3 has a function that finds connected components, but I haven't tried it myself.
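Here is a minimal sketch of that workaround (same OpenCV 2.x API and the testImg.png file name from your question):

#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;

int main()
{
    Mat img = imread("testImg.png", 0);  // 0 = load as grayscale
    Mat img_bw;
    threshold(img, img_bw, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);

    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;
    Mat tmp = img_bw.clone();  // findContours modifies its input
    findContours(tmp, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);

    for (size_t i = 0; i < contours.size(); i++)
    {
        // Rasterize the i-th contour as a filled blob on a black image
        Mat filled = Mat::zeros(img_bw.size(), CV_8UC1);
        drawContours(filled, contours, (int)i, Scalar(255), CV_FILLED);

        // moments() with binaryImage=true treats every nonzero pixel as 1,
        // so m00 is a pixel count, like regionprops' Area in MATLAB
        Moments m = moments(filled, true);
        cout << "area of blob " << i << ": " << m.m00 << endl;
    }
    return 0;
}

Note that CV_FILLED together with CV_RETR_EXTERNAL fills in any holes inside a blob; for the single solid blob in your test image this does not matter.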