The standard approach to validating a camera calibration is to compute the distance between each detected image point and the corresponding world point reprojected into the image. This procedure validates the intrinsic and extrinsic parameters together.
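
For concreteness, this is roughly how I compute that error (a sketch with OpenCV; the variable names are mine, and K, dist, rvec, tvec are assumed to come from the calibration itself):

```python
import cv2
import numpy as np

def reprojection_error(object_points, image_points, K, dist, rvec, tvec):
    """RMS pixel distance between detected points and reprojected world points for one view."""
    projected, _ = cv2.projectPoints(object_points, rvec, tvec, K, dist)
    diff = image_points.reshape(-1, 2) - projected.reshape(-1, 2)
    return np.sqrt(np.mean(np.sum(diff ** 2, axis=1)))
```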

It is also possible to verify the accuracy of the nonlinear distortion parameters by capturing images of straight lines, undistorting the images, and measuring whether the lines come out straight.
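
A sketch of that straightness check, again assuming the estimated K and dist, with line_points detected along one physical straight line:

```python
import cv2
import numpy as np

def line_straightness_residuals(line_points, K, dist):
    """Undistort detected points along one physical line and return their
    perpendicular distances (pixels) to the best-fit straight line."""
    pts = np.asarray(line_points, dtype=np.float64).reshape(-1, 1, 2)
    undist = cv2.undistortPoints(pts, K, dist, P=K).reshape(-1, 2)  # P=K keeps pixel units
    centered = undist - undist.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)        # total-least-squares line fit
    normal = vt[-1]                           # unit normal of the fitted line
    return np.abs(centered @ normal)          # per-point deviation from straightness
```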

Is there a way to verify the accuracy of the linear intrinsic parameters (optical center, focal length, skew) separately from the extrinsics?

1 Answer

It's tricky, and it tends toward very tricky if you require high accuracy. The problem is that all the intrinsic parameters are coupled in the reprojection error.

To give you an idea of the difficulties involved, consider the case of the principal point. It can be proven that the principal point of a pinhole camera is the orthocenter of the triangle formed by the vanishing points of three mutually orthogonal directions. This would seem to suggest a procedure for verifying it independently of the other intrinsic parameters: take one or more images collectively showing three or more pencils of parallel lines along mutually orthogonal directions, detect and model said pencils of lines, estimate their vanishing points, etc. However, to precisely model the detected lines, so you can intersect them to find the vanishing points, you need to accurately undistort the images - and guess what, the center of the nonlinear lens distortion is often approximated by the principal point, so your "verification" procedure ends up using exactly the same estimated parameter you are trying to independently verify.
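
For illustration, a minimal sketch of that orthocenter computation, assuming square pixels (zero skew, unit aspect ratio) and that v1, v2, v3 are the already-estimated vanishing points of three mutually orthogonal scene directions:

```python
import numpy as np

def principal_point_from_vanishing_points(v1, v2, v3):
    """Orthocenter of the triangle formed by the vanishing points of three
    mutually orthogonal directions; for a square-pixel pinhole camera this
    coincides with the principal point."""
    a, b, c = (np.asarray(v, dtype=np.float64) for v in (v1, v2, v3))
    # The altitude from a is perpendicular to (b - c): (b - c) . x = (b - c) . a,
    # and similarly for the altitude from b; their intersection is the orthocenter.
    A = np.vstack([b - c, c - a])
    rhs = np.array([(b - c) @ a, (c - a) @ b])
    return np.linalg.solve(A, rhs)
```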

You could try to work around the above difficulty by using an alternate non-parametric model of the nonlinear distortion - for example a thin-plate spline built off a grid, using a cost function that depends only on deviation from linearity - as you suggest. Then again, it's tricky to come up with such a cost function that is unbiased: simply fitting a straight line with linear least squares won't do, since the distorted images of the line points are in general not i.i.d. with respect to the underlying undistorted line. So you need to use a "local" model for each line, typically a low-order polynomial.
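
One possible reading of that "local low-order polynomial" idea, purely as a sketch (the degree and the way the nonlinear part is isolated are arbitrary choices here, not something prescribed above):

```python
import numpy as np

def deviation_from_linearity(line_points, degree=3):
    """Cost for one detected (still distorted) line: fit a low-order polynomial
    in a frame aligned with the line and return the RMS contribution of its
    nonlinear terms, so a perfectly straight line scores zero."""
    pts = np.asarray(line_points, dtype=np.float64)
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    local = centered @ vt.T                   # rotate: dominant direction -> local x axis
    x, y = local[:, 0], local[:, 1]
    coeffs = np.polyfit(x, y, degree)         # highest-degree coefficient first
    coeffs[-2:] = 0.0                         # drop the linear and constant terms
    return np.sqrt(np.mean(np.polyval(coeffs, x) ** 2))
```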

In the end, you are much better off accepting that the parameters (both intrinsic and extrinsic) are coupled, and basing your verification on the input-output needs of your actual application: determine what RMS reprojection error over the image area is acceptable, then use independent sets of images of a known calibration object - one that somehow models the properties of the 3D scene that matter to your application - reproject its points, and verify that the errors you get are acceptable.
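
For example, one way to set that up with OpenCV, assuming object_points are the known 3D coordinates of the calibration object and image_points_per_view are its detected points in validation images that were not used for calibration:

```python
import cv2
import numpy as np

def validation_rms(object_points, image_points_per_view, K, dist):
    """RMS reprojection error over an independent set of views, keeping the
    already-estimated intrinsics fixed and solving only for each view's pose."""
    sq_errors = []
    for image_points in image_points_per_view:
        ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
        if not ok:
            continue
        projected, _ = cv2.projectPoints(object_points, rvec, tvec, K, dist)
        diff = image_points.reshape(-1, 2) - projected.reshape(-1, 2)
        sq_errors.append(np.sum(diff ** 2, axis=1))
    return np.sqrt(np.mean(np.concatenate(sq_errors)))
```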