4 votes

Do you have any ideas or recommendations for camera calibration when the number of samples is limited and confined to a small region of the image?

Here is some additional information:

I am working on a project to assist disabled people in using a computer with their eyes. There is something that is causing me a bit of trouble because of my inexperience with OpenCV.

The camera is head-mounted. The distortion is not bad, but the eyeball itself is convex and moves rotationally. I am planning to "flatten" the eye so it appears to move on a plane, and the obvious choice is to calibrate the camera to try to remove the radial distortion.

During the calibration process the user looks at the corners of a grid on a screen. The moments of the pupil are stored in a Mat for each position, so I end up with an image of dots corresponding to a number of eye positions when looking at the corners of a grid on the screen.

I can draw filled polygons connecting groups of four dots to create a chessboard pattern, or I can save each eye position as a dot and use the symmetric circle pattern to calibrate (sketched below).
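For reference, a minimal sketch of the circle-pattern route in Python/OpenCV; `dots_img`, the 4x4 pattern size, and the variable names are hypothetical placeholders, not my actual data:

```python
import numpy as np
import cv2

# dots_img: hypothetical 8-bit grayscale image with one filled dot per
# recorded eye position, arranged as a symmetric grid of targets.
pattern_size = (4, 4)  # dots per row/column -- an assumption

found, centers = cv2.findCirclesGrid(
    dots_img, pattern_size, flags=cv2.CALIB_CB_SYMMETRIC_GRID)

if found:
    # Ideal grid coordinates on the target plane (z = 0).
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0],
                           0:pattern_size[1]].T.reshape(-1, 2)

    # With a static camera and a fixed eye region this is a single,
    # tiny view, so the calibration will be poorly constrained.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        [objp], [centers], dots_img.shape[::-1], None, None)
    print("RMS reprojection error:", rms)
```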

The issue I have is that the camera is static and the eye's position does not change, so I am limited in how many images I can generate; the position range is not that great.

I am thinking about subdividing the grid into smaller chessboard patterns, but they would all lie in the same small region, so I am not sure how useful this will be.

Thanks!

Please specify your exact question more clearly/explicitly. It helps you get more answers. - Menelaos
Thanks, it is now rewritten; I hope it is clearer. - Jorge

1 Answer

1 vote

What you're talking about doesn't actually seem to be camera calibration - it is the calibration of your eye tracking setup.

When you calibrate a camera in OpenCV, you do try to remove radial and tangential distortion, so it seems intuitive to apply that process to "flatten" a round object. However, the radial distortion introduced by a spherical lens is not what you're dealing with. You're concerned with the way that points on a spherical object are projected into your image.

Admittedly, the models will look very similar, but the point is that you shouldn't touch the calibration of your camera (which you should do offline) during the calibration of your setup to a test subject. The fact that your "position range" is limited is inherent to your problem, and can't be changed by image processing. The eye that you're filming only fills up so much of your camera's field of view.
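For completeness, that offline step is just the standard OpenCV chessboard procedure. A minimal sketch in Python, where the 9x6 inner-corner board and the `calib_images` folder are assumptions of mine:

```python
import glob
import numpy as np
import cv2

board = (9, 6)  # inner corners of a printed chessboard -- an assumption
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.png"):  # hypothetical folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        # Refine corner locations to sub-pixel accuracy.
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
# Apply once per frame before any pupil tracking.
undistorted = cv2.undistort(gray, K, dist)
```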

Personally, I would just record the pupil position at 9 evenly distributed points on your screen and correlate the screen coordinates to the image coordinates of the pupil with a second-order polynomial. This comes down to taking the first terms of a Taylor expansion of the spherical projection, which is probably good enough unless the eye movements are large. You can then test the predicted movements against a second calibration with 16 instead of 9 points.
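A minimal sketch of that fit, assuming NumPy and two hypothetical 9x2 arrays, `pupil_xy` (measured pupil centers in image coordinates) and `screen_xy` (the corresponding screen targets):

```python
import numpy as np

def design_matrix(p):
    """Second-order polynomial terms in pupil coordinates (x, y)."""
    x, y = p[:, 0], p[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])

# pupil_xy: (9, 2) pupil centers, one per screen target (assumed input)
# screen_xy: (9, 2) corresponding screen coordinates (assumed input)
A = design_matrix(pupil_xy)
coeffs, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)  # (6, 2) matrix

def predict_gaze(pupil_point):
    """Map a new pupil position to a predicted screen position."""
    return design_matrix(np.atleast_2d(pupil_point)) @ coeffs

# Validate against a second, denser calibration, e.g. 16 points:
# err = np.linalg.norm(predict_gaze(pupil_xy16) - screen_xy16, axis=1)
```

With 6 coefficients per screen axis and 9 observations the system is already overdetermined, and the 16-point pass gives you an independent estimate of the residual error.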

I assume you could find a book on the topic for more info.