My goal is for an autonomous robot to navigate a walled maze using a camera. The camera is fixed atop the robot and faces downward so that it can see the walls and the robot from above.
The approach that seemed most straightforward given my experience was to:
- Threshold the image to extract the red walls
- Perform Canny edge detection
- Use the Hough transform to detect the strong lines from the edges
as seen below after some parameter tweaking (a rough sketch of the pipeline follows).
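For reference, this is roughly what my pipeline looks like in OpenCV (Python). The HSV colour ranges, the Canny thresholds, the Hough threshold, and the file name are all placeholder values from my tweaking, not something I'm claiming are the right settings:

```python
import cv2
import numpy as np

frame = cv2.imread("maze_frame.png")  # placeholder file name

# Threshold in HSV to extract the red walls. Red wraps around hue 0,
# so two ranges are combined into one binary mask.
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
lower = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))
upper = cv2.inRange(hsv, (170, 100, 100), (180, 255, 255))
mask = cv2.bitwise_or(lower, upper)

# Canny edge detection on the wall mask.
edges = cv2.Canny(mask, 50, 150)

# Standard Hough transform to pick out the strong lines.
# Returns an (N, 1, 2) array of (rho, theta) pairs, or None.
lines = cv2.HoughLines(edges, 1, np.pi / 180, 80)

# Draw the detected lines; in practice I get several near-duplicate
# lines per wall edge, which is the problem described below.
if lines is not None:
    for rho, theta in lines[:, 0]:
        a, b = np.cos(theta), np.sin(theta)
        x0, y0 = a * rho, b * rho
        p1 = (int(x0 + 1000 * (-b)), int(y0 + 1000 * a))
        p2 = (int(x0 - 1000 * (-b)), int(y0 - 1000 * a))
        cv2.line(frame, p1, p2, (0, 255, 0), 2)
```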
I want the robot to move forward while avoiding "hitting" the red walls. The problem is that the Hough transform returns multiple lines per wall edge. One idea I had was to perform k-means clustering on the lines and use the centre (mean) of each cluster as the wall edge, but I do not know the number of wall edges (and therefore the number of clusters to feed into k-means) at any given time while navigating the maze (walls ahead, walls behind, multiple turn intersections, etc.).
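To make the k-means idea concrete, this is roughly what I had in mind: cluster the (rho, theta) pairs returned by HoughLines and keep each cluster centre. The theta scale factor is an arbitrary placeholder, and `k` is exactly the value I don't know how to choose:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_lines(lines, k):
    """lines: (N, 2) array of (rho, theta) pairs from cv2.HoughLines."""
    # Scale theta so that angle differences are roughly comparable to
    # rho differences (in pixels); the factor 100 is a placeholder.
    features = np.column_stack([lines[:, 0], lines[:, 1] * 100.0])
    km = KMeans(n_clusters=k, n_init=10).fit(features)
    centres = km.cluster_centers_
    # Undo the scaling to get back (rho, theta) cluster centres.
    return np.column_stack([centres[:, 0], centres[:, 1] / 100.0])
```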
Any help in finding a good way to get a consistent wall location to compare against the robot's location (which is fixed in every image frame) at any point while navigating the maze would be greatly appreciated. I'm also open to entirely different approaches to this problem.