1 vote

I am very new to camera calibration and have been trying to work with the Camera Calibrator app from MATLAB's Computer Vision Toolbox.

I followed the steps suggested on the website, and so far so good: I was able to obtain the intrinsic parameters of the camera.

Now I am confused about what I should do with the cameraParameters object that was created when the calibration was done.

My questions are:

(1) What should I do with the cameraParameters object that was created?
(2) How do I use this object when I am using the camera to capture images of something?
(3) Do I need the checkerboard around each time I capture images for my experiment?
(4) Should the camera be placed at the same spot each time?

I am sorry if these questions are really beginner-level; camera calibration is new to me, and I was not able to find the answers on my own.

Thank you so much for your help.


3 Answers

1 vote

I assume you are working with just one camera, so only the intrinsic parameters of the camera are in play.

(1), (2) Once your camera is calibrated, you need to use these parameters to undistort the image. Cameras do not capture images exactly as they appear in reality, because the lens distorts them a bit; the calibration parameters are used to correct the images. See the Wikipedia article on lens distortion for more.
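
As a minimal sketch of that workflow in MATLAB (the file names cameraParams.mat and myImage.png are assumptions; use whatever you saved from the Camera Calibrator app):

    % Load the cameraParameters object produced by the Camera Calibrator app.
    load('cameraParams.mat', 'cameraParams');

    % Read an image taken with the same camera at the same focus setting.
    I = imread('myImage.png');

    % Remove the lens distortion using the calibrated intrinsics.
    J = undistortImage(I, cameraParams);

    % Compare the original and the corrected image side by side.
    imshowpair(I, J, 'montage');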

About when you need to recalibrate the camera (3): if you set up the camera and do not change its focus, you can keep using the same calibration parameters, but once you change the focal distance a recalibration is necessary. The checkerboard itself is only needed during calibration, not each time you capture images for your experiment.

(4) As long as you do not change the focal distance and you are not using a stereo camera system, you can move your camera freely.

1 vote

What you are looking for are two separate calibration steps: alignment of the depth image to the color image, and conversion from depth to a point cloud. Both functions are provided by the Kinect for Windows SDK, and there are MATLAB wrappers that call these SDK functions. You may want to do your own calibration only if you are not satisfied with the manufacturer calibration information stored on the Kinect. Usually the error is within 1-2 pixels in the 2D alignment, and about 4 mm in 3D.
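
If you would rather stay inside MATLAB, a hedged alternative (assuming you have the Image Acquisition Toolbox with the Kinect support package, which this answer does not mention) is the built-in pcfromkinect function, which handles the depth-to-point-cloud conversion and color alignment for you:

    % Create video device objects for the Kinect's color and depth streams.
    colorDevice = imaq.VideoDevice('kinect', 1);
    depthDevice = imaq.VideoDevice('kinect', 2);

    % Grab one frame from each stream.
    colorImage = step(colorDevice);
    depthImage = step(depthDevice);

    % Convert the depth frame to a point cloud in real-world units,
    % with colors mapped from the color frame.
    ptCloud = pcfromkinect(depthDevice, depthImage, colorImage);
    pcshow(ptCloud);

    % Release the devices when done.
    release(colorDevice);
    release(depthDevice);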

0 votes

When you calibrate a single camera, you can use the resulting cameraParameters object for several things. First, you can remove the effects of lens distortion using the undistortImage function, which takes cameraParameters as an input. There is also a function called extrinsics, which you can use to locate your calibrated camera in the world relative to some reference object (e.g., a checkerboard), as in the sketch below. Here's an example of how you can use a single camera to measure planar objects.
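
As a minimal sketch of the extrinsics workflow (the file name scene.png and the 25 mm square size are assumptions; use your own image and the actual square size of your checkerboard):

    % Undistort first so the detected points match the linear camera model.
    I = undistortImage(imread('scene.png'), cameraParams);

    % Detect the checkerboard corners in the image.
    [imagePoints, boardSize] = detectCheckerboardPoints(I);

    % Generate the corresponding world coordinates of the corners,
    % using the known square size (assumed here to be 25 mm).
    squareSize = 25;  % in millimeters
    worldPoints = generateCheckerboardPoints(boardSize, squareSize);

    % Compute the camera's rotation and translation relative to the board.
    [rotationMatrix, translationVector] = extrinsics(imagePoints, ...
        worldPoints, cameraParams);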

A depth sensor like the Kinect is a bit of a different animal: it already gives you the depth in real-world units at each pixel. Additional calibration of the Kinect is useful only if you want a more precise 3D reconstruction than what it gives you out of the box.

It would help if you could tell us more about what you are trying to accomplish with your experiments.