4 votes

From Wikipedia, camera resectioning is the process of finding the true parameters of the camera that produced a given photograph or video. Camera resectioning is also known as geometric camera calibration.

Currently I am using the Camera Calibration Toolbox for Matlab for my camera calibration. The toolbox returns calibration parameters such as focal length, principal point, skew, and distortion. However, the issue with this method is that it requires an extra calibration step using a special calibration object, such as a checkerboard. Additionally, the result is only valid for one focus setting of the camera.
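For reference, here is roughly what my current checkerboard workflow looks like, sketched with OpenCV instead of the Matlab toolbox (the board dimensions, square size, and image paths are placeholders):

```python
# Minimal sketch of checkerboard calibration with OpenCV, roughly what
# the Matlab toolbox does. Board size, square size, and paths are placeholders.
import glob
import numpy as np
import cv2

pattern = (9, 6)        # inner corners per row/column (assumed board layout)
square_size = 25.0      # square edge length in mm (placeholder)

# 3-D corner coordinates in the board's own frame (the z = 0 plane)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_size

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.jpg"):   # placeholder path
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        # refine corner locations to sub-pixel accuracy
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# K holds the focal lengths, principal point, and skew; dist the distortion terms
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection error:", rms)
print("intrinsics:\n", K)
```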

How can I get the calibration parameters without manually calibrating? For example, how does Microsoft's Photosynth perform camera calibration on its images?


4 Answers

3 votes

You're looking for a body of research called self-calibration or auto-calibration. There are several freely available papers, and I'd recommend starting with this tutorial.

1 vote

Photosynth has the advantage that it has several images of the same scene and can track points of interest through them. That is likely the main method they use for determining the locations from which the photos were taken, as well as the viewing angles and focal lengths. While you probably only get results relative to the other views, most of them likely cluster in a single plane, which you can then simply declare as the ground.
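To illustrate, here is a rough sketch of that two-view step with OpenCV: match interest points between two photos, estimate the essential matrix, and recover the relative pose. The intrinsic guess below (focal length scaled from the image width, principal point at the centre) is an assumption; a real pipeline would refine it with bundle adjustment:

```python
# Rough sketch of the two-view step behind structure-from-motion pipelines:
# match interest points, estimate the essential matrix, recover relative pose.
import numpy as np
import cv2

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder paths
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

# detect and match interest points across the two views
orb = cv2.ORB_create(4000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Assumed intrinsics: focal guessed from the image width, centre as
# principal point -- a real system would refine these.
h, w = img1.shape
K = np.array([[1.2 * w, 0, w / 2],
              [0, 1.2 * w, h / 2],
              [0, 0, 1]])

E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
print("relative rotation:\n", R)
print("translation direction (scale is unknown):", t.ravel())
```

Note that the translation comes back only up to scale, which matches the point above: you get camera positions relative to one another, not in absolute units.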

By the way: the researchers who built this published their work, and the papers are available online¹: Photo Tourism, Modeling the World from Internet Photo Collections, Finding paths through the world's photos.


¹ Provided you have an ACM subscription, which you generally should have, at least through work or university.

0 votes

It also doesn't need anything like the sub-pixel level of correction you get from a checkerboard. At best it simply has to rotate and shift overlapping images; even with poor images it only has to find a few edges to take out converging verticals.
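For example, here is a rough OpenCV sketch of that kind of coarse alignment: estimate a homography from a few matched points and warp one overlapping image onto the other, with no calibration object involved (file names are placeholders):

```python
# Minimal sketch of coarse alignment between two overlapping photos:
# estimate a homography from matched features and warp one onto the other.
import numpy as np
import cv2

img1 = cv2.imread("left.jpg")    # placeholder file names
img2 = cv2.imread("right.jpg")

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC discards mismatched points; H maps img2's plane onto img1's
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

h, w = img1.shape[:2]
warped = cv2.warpPerspective(img2, H, (2 * w, h))
warped[0:h, 0:w] = img1        # naive overlay of the reference image
cv2.imwrite("stitched.jpg", warped)
```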

0 votes

Perhaps the camera manufacturer could provide you with data. I don't know anything about Photosynth, but any "calibration" done without some object to calibrate against, or without known properties of the lenses and sensors, would necessarily rest on suspect prior beliefs, no?

Edit: I see from other comments that Photosynth stitches photos together. So the prior beliefs include the knowledge that the several photos are pictures of different aspects of the same scene. Its job then is not so much to calibrate the camera as to reconcile the images themselves.