I'm working on a football (soccer) robot, following this tutorial. My machine has 2 cameras, and the ground map has 6 recognizable points in front of the goal.
What I'm doing:
- Take photos and get the intrinsic parameters (focal length, distortion coefficients);
- While moving, find the points and use solvePnP to get [R|T];
- Use projectPoints with the recovered [R|T] to get real-world coordinates (a rough sketch of these steps follows this list).
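
In code, the pipeline looks roughly like this (the intrinsics, field coordinates and pixel detections below are placeholder values I made up for the sketch, and SOLVEPNP_IPPE is just my guess at a suitable flag since all the field marks are coplanar):

```python
import numpy as np
import cv2

# Intrinsics from a prior calibration step (cv2.calibrateCamera) -- placeholder values.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)  # assume negligible distortion for the sketch

# The 6 field points in world coordinates (metres, z = 0 on the ground plane).
object_points = np.array([[0.0, 0.0, 0.0],
                          [1.0, 0.0, 0.0],
                          [2.0, 0.0, 0.0],
                          [0.0, 1.5, 0.0],
                          [1.0, 1.5, 0.0],
                          [2.0, 1.5, 0.0]], dtype=np.float64)

# Corresponding detected pixel positions (would come from the vision step).
image_points = np.array([[310.0, 260.0],
                         [420.0, 258.0],
                         [530.0, 255.0],
                         [300.0, 380.0],
                         [425.0, 378.0],
                         [545.0, 375.0]], dtype=np.float64)

# Pose of the world frame in the camera frame. SOLVEPNP_IPPE is intended for
# coplanar points, which might behave better than the default iterative
# solver when only a few points are visible.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs,
                              flags=cv2.SOLVEPNP_IPPE)

# Reproject the world points to check how good [R|T] is.
reprojected, _ = cv2.projectPoints(object_points, rvec, tvec,
                                   camera_matrix, dist_coeffs)
error = np.linalg.norm(reprojected.reshape(-1, 2) - image_points, axis=1).mean()
print("mean reprojection error (px):", error)

# Camera position in world coordinates: C = -R^T * t
R, _ = cv2.Rodrigues(rvec)
camera_position = -R.T @ tvec
print("robot/camera position on the field:", camera_position.ravel())
```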
From my tests (on a chessboard with 6×9 inner corners), 10 points worked fine, but with only 5 I get really bad results (and that's before considering poor visibility or the ball blocking the view).
I'm thinking about presetting a good [R|T] (captured with a clear view of the full map) and, while moving, using the previous [R|T] as an initial approximation to help compute a better one. But after some time, as [R|T] becomes less accurate, that approximation is unlikely to stay useful.
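
Something like this is what I mean by reusing the old pose (just a sketch; the helper name and the fallback logic are mine, and I believe useExtrinsicGuess only applies to the iterative solver):

```python
import numpy as np
import cv2

def solve_pose_with_guess(object_points, image_points,
                          camera_matrix, dist_coeffs,
                          prev_rvec=None, prev_tvec=None):
    """Estimate [R|T]; if a previous pose is available, use it as the
    starting point of the iterative solver (useExtrinsicGuess=True)."""
    if prev_rvec is not None and prev_tvec is not None:
        # Copy, since solvePnP treats rvec/tvec as input/output when a guess is given.
        rvec, tvec = prev_rvec.copy(), prev_tvec.copy()
        ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                      camera_matrix, dist_coeffs,
                                      rvec, tvec,
                                      useExtrinsicGuess=True,
                                      flags=cv2.SOLVEPNP_ITERATIVE)
    else:
        # No history yet: solve from scratch.
        ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                      camera_matrix, dist_coeffs)
    return ok, rvec, tvec
```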
Another possible solution is to use the field lines to compute [R|T]. For example, after detecting a set of points along a line, associate them with a known line such as x=0 or y=0, and use those constraints to compute [R|T], which would probably give a better result. Is there a way to do this?
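
To make the idea concrete, here is a rough sketch of what I imagine: fit a 2D line to the detected pixels, sample points along the known world line, and refine the pose by minimizing the point-to-line reprojection error (the helper names, the scipy least_squares refinement and the placeholder values are all my own guesses, not an existing OpenCV routine):

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

def point_line_residuals(pose, world_samples_per_line, image_lines,
                         camera_matrix, dist_coeffs):
    """pose = [rvec (3), tvec (3)].  For every known field line, project
    sample points taken along it and measure their signed distance to the
    2D line fitted to the detected pixels (a*x + b*y + c = 0, a^2 + b^2 = 1)."""
    rvec, tvec = pose[:3], pose[3:]
    residuals = []
    for world_samples, (a, b, c) in zip(world_samples_per_line, image_lines):
        projected, _ = cv2.projectPoints(world_samples, rvec, tvec,
                                         camera_matrix, dist_coeffs)
        projected = projected.reshape(-1, 2)
        residuals.append(a * projected[:, 0] + b * projected[:, 1] + c)
    return np.concatenate(residuals)

def refine_pose_from_lines(rvec0, tvec0, world_samples_per_line, image_lines,
                           camera_matrix, dist_coeffs):
    """Refine an initial pose (e.g. last frame's [R|T]) against line constraints."""
    x0 = np.concatenate([rvec0.ravel(), tvec0.ravel()])
    result = least_squares(point_line_residuals, x0,
                           args=(world_samples_per_line, image_lines,
                                 camera_matrix, dist_coeffs))
    return result.x[:3].reshape(3, 1), result.x[3:].reshape(3, 1)

# Example: the field line x = 0 on the ground plane, sampled every 0.5 m.
goal_line_samples = np.array([[0.0, t, 0.0] for t in np.arange(0.0, 3.1, 0.5)])
# Detected image line in normalised form a*x + b*y + c = 0 (e.g. from cv2.fitLine
# or a least-squares fit to the detected edge pixels) -- placeholder values.
detected_line = (0.02, 0.9998, -250.0)
# Would be called as:
# rvec, tvec = refine_pose_from_lines(prev_rvec, prev_tvec, [goal_line_samples],
#                                     [detected_line], camera_matrix, dist_coeffs)
```

The refinement would be seeded with the last frame's [R|T], so the line constraints only have to correct a small drift.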
Or am I getting it all wrong? Any ideas or help are appreciated!