
I'm working on a football (soccer) robot, following this tutorial. My machine has 2 cameras, and the ground map has 6 recognizable points in front of the goal:

2 at the corners and 4 around the goal. What I'm doing:

  1. Take photos and get the intrinsic parameters (focal length, distortion coefficients);
  2. While moving, find the points and use solvePnP to get [R|T];
  3. Use projectPoints to get the real-world coordinates.
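
As a sanity check, the pipeline above boils down to the pinhole model x ~ K (R X + T). Here is a minimal numpy-only sketch (all numbers made up; no OpenCV needed) of what projectPoints computes:

```python
import numpy as np

# Made-up intrinsics: a 640x480 camera with a 500 px focal length.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                       # rotation part of [R|T]
t = np.array([0.0, 0.0, 2.0])       # camera 2 m from the ground points

# Two field points on the ground plane (z = 0), in metres.
world_pts = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0]])

def project(K, R, t, X):
    """Pinhole projection: x ~ K (R X + t), then divide by depth."""
    cam = (R @ X.T).T + t           # world -> camera coordinates
    uv = (K @ cam.T).T              # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]   # perspective division

print(project(K, R, t, world_pts))  # -> [[320. 240.] [570. 240.]]
```

If solvePnP returned a good [R|T], reprojecting the known map points this way should land close to where you detected them; a large residual is a quick test for a bad pose.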

From what I've tried (on a chessboard with 6×9 inner corners), 10 points worked fine, but with only 5 I get really bad results (and that is before considering poor visibility, or the ball blocking the view).

I'm thinking about presetting a good [R|T] (computed while the full map is clearly visible) and, while moving, using the old [R|T] as an initial approximation to get a better one. But after some time, as [R|T] becomes less accurate, that approximation will no longer be good enough.
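
The "old pose as initial guess" idea is essentially what an iterative PnP refinement does (in OpenCV you would pass the previous rvec/tvec to solvePnP with useExtrinsicGuess=True). A hedged numpy-only sketch of Gauss-Newton refinement seeded with the previous pose, with all numbers made up:

```python
import numpy as np

def rodrigues(rvec):
    """Axis-angle vector -> rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    Kx = np.array([[0.0, -k[2], k[1]],
                   [k[2], 0.0, -k[0]],
                   [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * Kx + (1 - np.cos(theta)) * (Kx @ Kx)

def project(K, rvec, t, X):
    cam = (rodrigues(rvec) @ X.T).T + t
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]

def refine_pose(K, world, image, rvec0, t0, iters=15):
    """Gauss-Newton on the reprojection error with a numeric Jacobian.
    Converges only if (rvec0, t0) -- the previous frame's pose -- is close."""
    p = np.hstack([rvec0, t0]).astype(float)
    def residual(q):
        return (project(K, q[:3], q[3:], world) - image).ravel()
    for _ in range(iters):
        r = residual(p)
        J = np.zeros((r.size, 6))
        for j in range(6):
            dq = np.zeros(6)
            dq[j] = 1e-6
            J[:, j] = (residual(p + dq) - r) / 1e-6
        p += np.linalg.lstsq(J, -r, rcond=None)[0]
    return p[:3], p[3:]

# Made-up example: five coplanar field points, a "true" pose, and an
# initial guess playing the role of the previous frame's [R|T].
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
world = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                  [1.0, 1.0, 0.0], [0.5, 0.5, 0.0]])
rvec_true, t_true = np.array([0.1, -0.05, 0.02]), np.array([0.2, -0.1, 3.0])
image = project(K, rvec_true, t_true, world)   # the "detected" pixels
rvec, t = refine_pose(K, world, image, np.zeros(3), np.array([0.0, 0.0, 2.5]))
```

This is also where your worry bites: once the guess drifts too far from the true pose, the refinement settles into a wrong minimum, so you need an occasional full re-localization (e.g. whenever all six points are visible).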

Another option, though, is to try using the lines to compute [R|T]. For example, after detecting a set of points on a line, relate it to x=0 or y=0... and use those correspondences to compute [R|T], which would probably give a better result. Is there a way to do this?

Or am I getting it all wrong? Any ideas or help are appreciated!

What you are describing is some kind of sparse SLAM. I want to know whether your robot has two eyes or just one; it seems to me that it only has one. Then what does "utilize its old [R|T] (as initial approx.) to help get a better [R|T]" mean? Do you mean the [R|T] between the initial pose and the current pose? – Yang Kui
I mean constantly computing [R|T], since it's a video input. – tartaruga_casco_mole

1 Answer


I'd like to contribute, but I have some questions about the setup:

(i.) I am assuming the camera is fixed on the robot, so [R|t] changes with respect to the field (the reference frame). Are you tracking the aforementioned points (features) on the field? In that case, their rotation and translation change as the robot moves around the field.

(ii.) Have you checked Hartley's paper on calibration from line correspondences? This might give you an idea for your second proposal.

Now, under the assumption I make in item (i.), calibrate each camera independently first to obtain the lens and tangential distortion parameters. You want to keep these fixed when re-calibrating from motion. Using lines seems more viable, as they are likely to be more robust against occlusions.
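
On the occlusion point: a line can be recovered from whatever fragment of it remains visible. A hedged numpy sketch (made-up points) of a total-least-squares line fit via SVD:

```python
import numpy as np

def fit_line_tls(pts):
    """Total-least-squares 2-D line fit: returns (centroid, unit direction)."""
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)   # principal axis = line direction
    return c, vt[0]

# Only a short visible fragment of the field line y = 2x + 1 (made up):
visible = np.array([[0.0, 1.0], [0.2, 1.4], [0.4, 1.8], [0.6, 2.2]])
c, d = fit_line_tls(visible)            # d comes out parallel to (1, 2)
```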

I can edit this answer in case you provide more details and enrich this discussion.

I would post this as a comment, but I do not have enough reputation.