
For an economic laboratory experiment I have created images containing dots. I would like a computer vision algorithm to count the number of dots. I have used OpenCV's HoughCircles to detect (and therefore count) the dots. The code works, but it misses a lot of dots:

import cv2 as cv
import numpy as np

filename = "graph_0.png"
src = cv.imread(cv.samples.findFile(filename), cv.IMREAD_COLOR)
gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY)

# smooth the image to suppress noise before circle detection
gray = cv.medianBlur(gray, 5)

rows = gray.shape[0]

# minDist (the fourth argument) is rows / 8: centres closer together than this are suppressed
circles = cv.HoughCircles(gray, cv.HOUGH_GRADIENT, 1, rows / 8,
                          param1=100, param2=0.05,
                          minRadius=1, maxRadius=30)

if circles is not None:
    circles = np.uint16(np.around(circles))
    for i in circles[0, :]:
        center = (i[0], i[1])
        # circle center
        cv.circle(src, center, 1, (0, 100, 100), 3)
        # circle outline
        radius = i[2]
        cv.circle(src, center, radius, (255, 0, 255), 3)

cv.imshow("detected circles", src)
cv.waitKey(0)

# number of detected circles
print(len(circles[0]) if circles is not None else 0)

Picture showing identified dots

I think one issue is that in many places the dots merge into larger blue areas. But the algorithm also misses a lot of dots that are entirely surrounded by white space.
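
To get a sense of how much the merging matters, here is a rough sketch of how the dots could be isolated by colour and counted as blobs. The HSV bounds are guesses for my image rather than calibrated values, and touching dots still collapse into a single blob here:

import cv2 as cv
import numpy as np

src = cv.imread("graph_0.png", cv.IMREAD_COLOR)

# keep only the (roughly) blue pixels; the bounds are guesses and would need tuning
hsv = cv.cvtColor(src, cv.COLOR_BGR2HSV)
mask = cv.inRange(hsv, np.array([90, 50, 50]), np.array([130, 255, 255]))

# count connected blue regions; dots that touch are merged into one blob
num_labels, labels = cv.connectedComponents(mask)
print(num_labels - 1)  # subtract the background label

This is only a diagnostic: it does not separate touching dots, but it shows how many distinct blue regions are actually present in the image.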

I have experimented with different parameters within cv.HoughCircles, for example

circles = cv.HoughCircles(gray, cv.HOUGH_GRADIENT, 1, rows / 32,
                          param1=100, param2=0.05,
                          minRadius=1, maxRadius=30)

where the minDist argument is now rows / 32 instead of rows / 8, leading to this image

Identified dots for "rows / 32"

This leads to a higher count, which is closer to the actual number of dots, but it mostly misidentifies dots.
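
For reference, a small sweep over minDist and param2 shows how sensitive the count is to these two parameters. The value grids below are arbitrary starting points rather than recommendations:

import cv2 as cv

gray = cv.cvtColor(cv.imread("graph_0.png"), cv.COLOR_BGR2GRAY)
gray = cv.medianBlur(gray, 5)
rows = gray.shape[0]

# try a few combinations of minDist and param2 and report the resulting counts
for min_dist in (rows / 32, rows / 16, rows / 8):
    for p2 in (0.05, 0.1, 0.3):
        c = cv.HoughCircles(gray, cv.HOUGH_GRADIENT, 1, min_dist,
                            param1=100, param2=p2,
                            minRadius=1, maxRadius=30)
        n = 0 if c is None else len(c[0])
        print(f"minDist={min_dist:.0f} param2={p2}: {n} circles")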

What can I do to increase the number of correctly identified dots? Should I change something within cv.HoughCircles, or do I need to apply additional transformations beforehand?