3
votes

I'm working on a project in which I have to detect Traffic lights (circles obviously). Now I am working with a sample image I picked up from a spot, however after all my efforts I can't get the code to detect the proper circle(light).

Here is the code:-

# import the necessary packages  
import numpy as np  
import cv2

image = cv2.imread('circleTestsmall.png')
output = image.copy()
# Apply Gaussian blur to smooth the image
blur = cv2.GaussianBlur(image,(9,9),0)
gray = cv2.cvtColor(blur, cv2.COLOR_BGR2GRAY)
# detect circles in the image
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.2, 200)

# ensure at least some circles were found
if circles is not None:
    # convert the (x, y) coordinates and radius of the circles to integers
    circles = np.round(circles[0, :]).astype("int")

    # loop over the (x, y) coordinates and radius of the circles
    for (x, y, r) in circles:
        # draw the circle in the output image, then draw a rectangle
        # corresponding to the center of the circle
        cv2.circle(output, (x, y), r, (0, 255, 0), 4)
        cv2.rectangle(output, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)

# show the output image
cv2.imshow("output", output)
cv2.imshow('Blur', blur)
cv2.waitKey(0)

The image in which I want to detect the circle - Image. The circle I want to detect is highlighted.

This is the output image - Output Image.

I tried playing with the Gaussian blur radius values and the minDist parameter in the Hough transform, but didn't have much success.
Can anybody point me in the right direction?

P.S. Some off-topic but crucial questions for my project:
1. My computer takes about 6-7 seconds to show the final image. Is my code bad, or is my computer? My specs are: Intel i3 M350 2.6 GHz (first gen), 6 GB RAM, Intel HD Graphics 1000 1625 MB.
2. Will the Hough transform work directly on a binary thresholded image?
3. Will this code run fast enough on a Raspberry Pi 3 to be real-time? (I have to mount it on a moving autonomous robot.)

Thank you!

If it takes 6-7 seconds on your desktop for ONE picture, how do you expect it to work on the much lighter Raspberry Pi in real time, probably taking 10 pictures a second? So you probably need to optimize. - JeD
It might be faster on the Raspberry since you don't need to draw the circles, but still. - JeD
There is nothing wrong with your computer and your code is probably OK. The Hough transform takes a long time; if you look at what it does under the hood, it will make sense why. It was never intended to be a realtime filter. And yes, you should apply it to the binary thresholded image directly. - Mad Physicist
@JeD The real purpose of the code is to detect a traffic light in a binary thresholded image and, as soon as the light goes off, send a command to a connected Arduino to run the motors. That's it. The robot will be stationary at the time of detection. Moreover, can you please suggest some optimizations? - user6026311
@MadPhysicist Very well, but how do I get it to detect the circle in this image? - user6026311

3 Answers

2
votes

First of all, you should restrict your parameters a bit.

Please refer to: http://docs.opencv.org/2.4/modules/imgproc/doc/feature_detection.html#houghcircles

At least set reasonable values for the min and max radius. Try to find that one particular circle first. If you succeed, increase your radius tolerance.

Hough transform is a brute-force method: it will try every possible radius for every edge pixel in the image. That's why it is not very suitable for real-time applications, especially if you do not provide proper parameters and input. You have no radius limits at the moment, so you will calculate hundreds, if not thousands, of circles for every pixel...
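A minimal sketch of such a constrained call (the radius bounds and accumulator thresholds below are placeholders you would have to tune to your image scale):

import cv2
import numpy as np

image = cv2.imread('circleTestsmall.png')
gray = cv2.cvtColor(cv2.GaussianBlur(image, (9, 9), 0), cv2.COLOR_BGR2GRAY)

# Restricting minRadius/maxRadius drastically cuts the search space
circles = cv2.HoughCircles(
    gray,
    cv2.HOUGH_GRADIENT,
    dp=1.2,
    minDist=200,
    param1=100,    # upper Canny threshold (guessed)
    param2=30,     # accumulator threshold; lower it if nothing is found
    minRadius=5,   # assumed lower bound on the light's radius in pixels
    maxRadius=30   # assumed upper bound; widen once the target circle is found
)

if circles is not None:
    for (x, y, r) in np.round(circles[0]).astype("int"):
        cv2.circle(image, (x, y), r, (0, 255, 0), 2)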

In your case the traffic light is also not very round, so the accumulated result won't be very good. Try finding highly saturated, bright, compact blobs of a reasonable size instead; it should be faster and more robust.
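A rough sketch of that blob idea (the HSV thresholds, area limits and circularity cutoff are guesses, not calibrated values):

import cv2
import numpy as np

image = cv2.imread('circleTestsmall.png')
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# High saturation and high value = "glowing" pixels (thresholds are guesses)
mask = cv2.inRange(hsv, (0, 120, 180), (180, 255, 255))

# [-2] keeps this working on both OpenCV 3.x and 4.x return signatures
contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
for c in contours:
    area = cv2.contourArea(c)
    if 30 < area < 2000:                      # plausible blob size (assumed)
        perimeter = cv2.arcLength(c, True)
        circularity = 4 * np.pi * area / (perimeter * perimeter + 1e-6)
        if circularity > 0.6:                 # reasonably compact / round
            (x, y), r = cv2.minEnclosingCircle(c)
            cv2.circle(image, (int(x), int(y)), int(r), (0, 255, 0), 2)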

You can further reduce processing time by restricting the image size. I guess you can assume that the traffic light will always be in the upper half of your image, so omit the lower half. Traffic lights will always be green, red or yellow; remove everything that is not of those colors... I think you get what I mean...
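Something along these lines, with illustrative (uncalibrated) HSV ranges:

import cv2
import numpy as np

image = cv2.imread('circleTestsmall.png')
roi = image[:image.shape[0] // 2]             # assume the light is in the upper half
hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)

# Keep only red / yellow / green pixels; everything else is discarded
red1   = cv2.inRange(hsv, (0,   100, 100), (10,  255, 255))
red2   = cv2.inRange(hsv, (170, 100, 100), (180, 255, 255))
yellow = cv2.inRange(hsv, (20,  100, 100), (35,  255, 255))
green  = cv2.inRange(hsv, (45,  100, 100), (90,  255, 255))
mask = red1 | red2 | yellow | green

# Run the detection only on the masked region
masked = cv2.bitwise_and(roi, roi, mask=mask)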

2
votes

I think that you should first perform a color segmentation based on the stoplight colors. It will tremendously reduce the ROI. Then you can apply the Hough Transform on the ROI edges only (because you want the contour).
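One way this could look in code, assuming an HSV segmentation for the red light only (add the yellow and green ranges the same way; all thresholds are placeholders):

import cv2
import numpy as np

image = cv2.imread('circleTestsmall.png')
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# Red wraps around the hue axis, so two ranges are combined
mask = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255)) | \
       cv2.inRange(hsv, (170, 100, 100), (180, 255, 255))

ys, xs = np.where(mask > 0)
if len(xs) > 10:                              # enough colored pixels to form an ROI
    x0, x1, y0, y1 = xs.min(), xs.max(), ys.min(), ys.max()
    roi = cv2.cvtColor(image[y0:y1 + 1, x0:x1 + 1], cv2.COLOR_BGR2GRAY)
    roi = cv2.GaussianBlur(roi, (5, 5), 0)
    # Hough only on the small ROI is much cheaper than on the full frame
    circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, 1.2, 50,
                               param1=100, param2=20, minRadius=3, maxRadius=50)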

0
votes

Another restriction: only accept circles where the inside color is homogeneous. This would throw out all the false hits in the above example.
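A sketch of that check (the standard-deviation cutoff of 20 is an arbitrary placeholder): for each candidate circle, sample the pixels inside it and keep the circle only if they barely vary.

import cv2
import numpy as np

def is_homogeneous(gray, x, y, r, max_std=20.0):
    # Build a filled-disc mask for the candidate circle
    mask = np.zeros(gray.shape, dtype=np.uint8)
    cv2.circle(mask, (x, y), r, 255, -1)
    inside = gray[mask == 255]
    # Homogeneous interior = low variation of the grey values
    return inside.size > 0 and inside.std() < max_std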