As a sort of "holiday project" I'm playing around with OpenCV and want to detect and measure things.
Current workflow (early stage - detection), roughly sketched in code after the list:
- Convert to grayscale (cv::cvtColor)
- Apply adaptive threshold (cv::adaptiveThreshold)
- Apply Canny edge detection (cv::Canny)
- Find contours (cv::findContours)
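For reference, that pipeline boils down to something like the sketch below (the file name "input.png", the adaptive-threshold block size and constant, and the Canny thresholds are all placeholder values I haven't tuned):

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat src = cv::imread("input.png");              // placeholder file name
    cv::Mat gray, thresh, edges;

    cv::cvtColor(src, gray, CV_BGR2GRAY);                // grayscale
    cv::adaptiveThreshold(gray, thresh, 255,
                          cv::ADAPTIVE_THRESH_MEAN_C,
                          cv::THRESH_BINARY, 11, 2);     // block size / C picked arbitrarily
    cv::Canny(thresh, edges, 50, 150);                   // edge thresholds picked arbitrarily

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(edges, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    cv::Mat result = src.clone();
    cv::drawContours(result, contours, -1, cv::Scalar(0, 255, 0), 2);
    cv::imwrite("contours.png", result);
    return 0;
}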
My results are pretty poor and I'm not sure which direction to take. I already have cvBlob working under my current setup (OS X 10.7.2, Xcode 4.2.1); is that a better way to go? If so, how should I implement it?
Or do I need background subtraction first? I tried that, but wasn't able to find contours afterwards.
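To be clear about what I mean by background subtraction on a still image: differencing against a shot of the empty scene, roughly like the sketch below (the file names and the threshold are placeholders; this is the part where I couldn't get usable contours):

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // "background.png" (empty scene) and "scene.png" (scene with the object)
    // are placeholder file names
    cv::Mat background = cv::imread("background.png", CV_LOAD_IMAGE_GRAYSCALE);
    cv::Mat scene      = cv::imread("scene.png", CV_LOAD_IMAGE_GRAYSCALE);

    cv::Mat diff, mask;
    cv::absdiff(scene, background, diff);                    // per-pixel difference
    cv::threshold(diff, mask, 30, 255, CV_THRESH_BINARY);    // threshold picked arbitrarily

    // close small holes so findContours gets solid blobs
    cv::morphologyEx(mask, mask, cv::MORPH_CLOSE,
                     cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5)));

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(mask, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
    return 0;
}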
Here's my image:
And this is my output when I draw the contours back onto the original image:
UPDATE
I got it working in my program, and my output looks a bit different …
- (IBAction)processImage:(id)sender
{
    cv::Mat forground = [[_inputView image] CVMat];       // grab the input image as a cv::Mat
    cv::Mat result = [self isolateBackground:forground];  // build the foreground mask
    [_outputView setImage:[NSImage imageWithCVMat:result]];
}
- (cv::Mat)isolateBackground:(cv::Mat &)_image
{
    // Note: the variables are named like B/G/R limits, but after the HSV
    // conversion below they effectively act as H, S and V bounds.
    int rh = 255, rl = 100, gh = 255, gl = 0, bh = 70, bl = 0;

    cv::cvtColor(_image, _image, CV_RGB2HSV_FULL);

    cv::Mat element = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));

    cv::Mat bgIsolation;
    cv::inRange(_image, cv::Scalar(bl, gl, rl), cv::Scalar(bh, gh, rh), bgIsolation);

    cv::bitwise_not(bgIsolation, bgIsolation);     // invert so the foreground is white
    cv::erode(bgIsolation, bgIsolation, cv::Mat()); // default 3x3 kernel
    cv::dilate(bgIsolation, bgIsolation, element);  // 5x5 rectangle

    return bgIsolation;
}
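Once this works reliably, the plan is to run findContours on that mask and pull measurements out of it, roughly like this (a sketch only, meant to slot into processImage: with the same forground Mat; the minimum-area filter is an arbitrary value):

cv::Mat mask = [self isolateBackground:forground];    // mask produced by the method above

std::vector<std::vector<cv::Point> > contours;
cv::findContours(mask.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

for (size_t i = 0; i < contours.size(); i++)
{
    double area = cv::contourArea(contours[i]);
    if (area < 100.0)                                  // arbitrary: skip tiny specks
        continue;

    cv::Rect box = cv::boundingRect(contours[i]);
    NSLog(@"contour %zu: area = %.1f, box = %d x %d",
          i, area, box.width, box.height);
}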