8 votes

I am using OpenCV's triangulatePoints function to determine 3D coordinates of a point imaged by a stereo camera.

I am finding that this function gives me different distances to the same point depending on the angle of the camera to that point.

Here is a video: https://www.youtube.com/watch?v=FrYBhLJGiE4

In this video, we are tracking the 'X' mark. In the upper left corner, info about the point being tracked is displayed. (YouTube dropped the quality; the video is normally much sharper, (2x1280) x 720.)

In the video, the left camera is the origin of the 3D coordinate system and it is looking in the positive Z direction. The left camera is undergoing some translation, but not nearly as much as the triangulatePoints function leads one to believe. (More info is in the video description.)

The metric unit is mm, so the point is initially triangulated at a distance of ~1.94 m from the left camera.

I am aware that insufficiently precise calibration can cause this behaviour. I have run three independent calibrations using a chessboard pattern. The resulting parameters vary too much for my taste (approx. ±10% for the focal length estimate).

As you can see, the video is not highly distorted. Straight lines appear pretty straight everywhere. So the optimum camera parameters must be close to the ones I am already using.

My question is, is there anything else that can cause this?

Can a convergence angle between the two stereo cameras have this effect? Or a wrong baseline length?

Of course, there is always the matter of errors in feature detection. Since I am using optical flow to track the 'X' mark, I get subpixel precision, which could be off by... I don't know... ±0.2 px?
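For a rough sense of scale, here is a back-of-the-envelope check (a Python sketch; the focal length and baseline below are illustrative placeholders, not my actual calibration values) of how much a ±0.2 px disparity error moves the triangulated depth of an ideal rectified pair, using Z = f*B/d:

```python
# How much does a +-0.2 px disparity error move the triangulated depth?
# f_px and B_mm are placeholder values, not the real calibration.

f_px = 700.0      # assumed focal length in pixels (placeholder)
B_mm = 120.0      # assumed stereo baseline in mm (placeholder)
Z_mm = 1940.0     # the distance reported above

d = f_px * B_mm / Z_mm          # ideal disparity at that depth, in pixels
for dd in (-0.2, 0.2):          # +-0.2 px detection error
    Z_perturbed = f_px * B_mm / (d + dd)
    print(f"disparity {d + dd:.2f} px -> depth {Z_perturbed:.0f} mm "
          f"(error {Z_perturbed - Z_mm:+.0f} mm)")
```

With these placeholder numbers, ±0.2 px works out to roughly ±9 mm at 1.94 m, so detection noise alone would not seem to explain large swings in the reported distance.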

I am using the Stereolabs ZED stereo camera. I am not accessing the video frames directly using OpenCV. Instead, I have to use the special SDK I acquired when purchasing the camera. It has occurred to me that this SDK might be doing some undistortion of its own.

So, now I wonder... If the SDK undistorts an image using incorrect distortion coefficients, can that create an image that is neither barrel-distorted nor pincushion-distorted but something different altogether?
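To convince myself this is even plausible, here is a minimal sketch (Python/OpenCV) that forward-distorts a grid of points with one made-up set of coefficients, undistorts them with a different made-up set, and looks at the residual displacement. All coefficient values are assumptions for illustration only.

```python
# Distort ideal grid points with one model, undistort with a different one,
# and inspect the residual warp. Coefficients below are made up.
import numpy as np
import cv2

w, h = 1280, 720
K = np.array([[700.0, 0.0, w / 2],
              [0.0, 700.0, h / 2],
              [0.0, 0.0, 1.0]])

true_dist  = np.array([-0.15, 0.05, 0.001, -0.001, 0.0])  # "real" lens (made up)
wrong_dist = np.array([-0.05, 0.00, 0.000,  0.000, 0.0])  # coefficients the SDK might use (made up)

# Regular grid of ideal (undistorted) pixel locations.
xs, ys = np.meshgrid(np.linspace(0, w - 1, 17), np.linspace(0, h - 1, 10))
ideal = np.stack([xs.ravel(), ys.ravel()], axis=1)

# Forward-distort the ideal points with the "true" model: convert to
# normalized coordinates, then reproject through K with distortion applied.
norm = cv2.undistortPoints(ideal.reshape(-1, 1, 2).astype(np.float64), K, None)
pts3d = cv2.convertPointsToHomogeneous(norm).reshape(-1, 3)
distorted, _ = cv2.projectPoints(pts3d, np.zeros(3), np.zeros(3), K, true_dist)

# Undistort with the wrong coefficients, back to pixel coordinates.
corrected = cv2.undistortPoints(distorted, K, wrong_dist, P=K).reshape(-1, 2)

residual = corrected - ideal
print("max residual displacement: %.2f px" % np.abs(residual).max())
```

Plotting the residual vectors over the image (rather than just printing the maximum) shows the shape of the leftover warp.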

Looks to me like the problem is in your camera calibration, as you say yourself. Also, the lens distortion is usually larger at the sides, so it makes sense that the geometric model is worse there. How many points did you take for the calibration? Have you tried different geometric models? Did you place the chessboard at different angles and distances? – Elad Joseph
Thank you for your response. I used a 13x13 chessboard pattern printed on sticky paper and glued perfectly flat to a straight wooden board. I made 3 independent series of shots. Every series contained 16-20 images. The chessboard was captured in various parts of the image with not very much variation in angle towards the camera (the board was always pretty orthogonal to the line of sight). – ancajic
Again, thank you for your thoughts. That helps me focus my investigation. I will expand the question with another idea, though... – ancajic
Your calibration sounds reasonable, but remember that the calibration is just fitting a function. If you give it only samples from the center, it will fit only the center and won't care about what's going on at the sides. Also, try looking at the reprojection error of the geometric model; it will tell you if something is fishy there. – Elad Joseph
What about the baseline length of your stereo system and the change in matched-point disparity on the rectified image pair? Did you perform stereo calibration? (It yields the baseline and the angle between the cameras automatically, and you were asking about a wrong baseline length.) – Victor Proon

2 Answers

2 votes

The SDK provided with the ZED camera performs undistortion and rectification of the images. The geometric model is the same as OpenCV's:

  • intrinsic parameters and distortion parameters for both Left and Right cameras.
  • extrinsic parameters for rotation/translation between Right and Left.

Through one of the ZED tools (the ZED Settings App), you can enter your own intrinsic matrices for left/right, distortion coefficients, and baseline/convergence.

To get a precise 3D triangulation, you may need to adjust those parameters, since they have a high impact on the disparity you estimate before converting it to depth.

OpenCV provides a good module to calibrate stereo cameras. It does a mono calibration (cv::calibrateCamera()) for left and right, followed by a stereo calibration (cv::stereoCalibrate()). It will output the intrinsic parameters (focal length, optical center (very important)) and the extrinsics (baseline = T[0], convergence = R[1] if R is a 3x1 rotation vector). The RMS reprojection error (the return value of stereoCalibrate()) is a good way to see whether the calibration has been done correctly.
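For reference, a minimal sketch of that pipeline in Python/OpenCV, assuming obj_pts (the known 3D chessboard corner positions per view) and img_pts_l / img_pts_r (the matching corners detected with findChessboardCorners in the left/right images) have already been collected:

```python
# Mono calibration per eye, then stereo calibration of the pair.
# obj_pts, img_pts_l, img_pts_r are assumed to be lists of per-view arrays.
import cv2

image_size = (1280, 720)  # per-eye resolution, adjust to your capture

# Mono calibration for each camera.
rms_l, K_l, D_l, _, _ = cv2.calibrateCamera(obj_pts, img_pts_l, image_size, None, None)
rms_r, K_r, D_r, _, _ = cv2.calibrateCamera(obj_pts, img_pts_r, image_size, None, None)

# Stereo calibration refines the pair and yields R, T between the cameras.
rms, K_l, D_l, K_r, D_r, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, img_pts_l, img_pts_r,
    K_l, D_l, K_r, D_r, image_size,
    flags=cv2.CALIB_USE_INTRINSIC_GUESS)

baseline = abs(T[0, 0])         # in the units of your chessboard squares
rvec, _ = cv2.Rodrigues(R)      # 3x1 rotation vector ("convergence" angles)
print("stereo RMS reprojection error:", rms)
print("baseline:", baseline, "rotation vector:", rvec.ravel())
```

Using CALIB_USE_INTRINSIC_GUESS lets the stereo step refine the mono intrinsics jointly; the default (CALIB_FIX_INTRINSIC) would keep them fixed.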

The important thing is that you need to do this calibration on raw images, not on the images provided by the ZED SDK. Since the ZED is a standard UVC camera, you can use OpenCV to get the side-by-side raw images (cv::VideoCapture with the correct device number) and extract the left and right native images.
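For example, something along these lines (a Python/OpenCV sketch; the device index and the 2560x720 side-by-side mode are assumptions to adjust for your setup):

```python
# Grab one raw side-by-side frame from the ZED as a plain UVC camera and
# split it into the left and right halves.
import cv2

cap = cv2.VideoCapture(0)                   # ZED seen as a standard UVC device
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 2560)     # 2 x 1280 side by side (assumed mode)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

ok, frame = cap.read()
if ok:
    h, w = frame.shape[:2]
    left, right = frame[:, :w // 2], frame[:, w // 2:]
    cv2.imwrite("left_raw.png", left)
    cv2.imwrite("right_raw.png", right)
cap.release()
```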

You can then enter those calibration parameters in the tool. The ZED SDK will then perform the undistortion/rectification and provide the corrected images. The new camera matrix is provided by getParameters(). You need to use those values when you triangulate, since the images are corrected as if they were taken from this "ideal" camera.
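As a sketch of that last step (Python/OpenCV), with the rectified intrinsics and baseline below standing in as placeholders for whatever getParameters() reports on your unit:

```python
# Build the projection matrices of the rectified ("ideal") pair and triangulate.
# fx, fy, cx, cy, B and the pixel coordinates are placeholders.
import numpy as np
import cv2

fx, fy, cx, cy = 700.0, 700.0, 640.0, 360.0   # rectified intrinsics (placeholder)
B = 120.0                                     # baseline in mm (placeholder)

K = np.array([[fx, 0, cx],
              [0, fy, cy],
              [0,  0,  1]], dtype=float)
P_left  = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-B], [0.0], [0.0]])])

# Matched pixel coordinates of the tracked point in the left/right images (2xN).
pt_left  = np.array([[650.0], [400.0]])
pt_right = np.array([[606.0], [400.0]])

X_h = cv2.triangulatePoints(P_left, P_right, pt_left, pt_right)
X = (X_h[:3] / X_h[3]).ravel()                # back from homogeneous coordinates
print("3D point in the left-camera frame (mm):", X)
```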

Hope this helps. /OB/

1 vote

There are 3 points I can think of that can probably help you.

  1. Probably the least important, but from your description it sounds like you have calibrated the cameras separately and then the stereo system. Running an overall optimization should improve the reconstruction accuracy, as some "less accurate" parameters compensate for the other "less accurate" parameters.

  2. If the accuracy of the reconstruction is important to you, you need a systematic approach to reducing the error. Building an uncertainty model is easy thanks to the mathematical model, and a few lines of code can build it for you. Say you want to see whether a 3D point is 2 meters away, at a particular angle to the camera system, and you have a specific uncertainty on the 2D projections of that 3D point; it is easy to back-project that uncertainty into the 3D space around your 3D point (see the sketch after this list). By adding uncertainty to the other parameters of the system you can then see which ones are more important and need to have lower uncertainty.

  3. This inaccuracy is inherent in the problem and the method you're using.

    • First, if you model the uncertainty you will see that reconstructed 3D points further away from the centers of the cameras have a much higher uncertainty. The reason is that the angle <left camera, 3D point, right camera> is narrower. I remember the MVG book had a good description of this with a figure.
    • Second, if you look at the implementation of triangulatePoints you will see that the pseudo-inverse method is implemented using SVD to construct the 3D point. That can lead to many issues, which you probably remember from linear algebra.
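Here is the sketch promised in point 2: it propagates an assumed pixel-level uncertainty to 3D by jittering the 2D projections and re-triangulating (a simple Monte Carlo, in Python/OpenCV). The projection matrices and pixel coordinates are placeholders.

```python
# Monte Carlo propagation of 2D detection uncertainty to the 3D point.
# The rectified setup and the measured pixels below are placeholders.
import numpy as np
import cv2

fx, fy, cx, cy, B = 700.0, 700.0, 640.0, 360.0, 120.0
K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-B], [0.0], [0.0]])])

pl = np.array([650.0, 400.0])   # measured projection in the left image
pr = np.array([606.0, 400.0])   # measured projection in the right image
sigma = 0.2                     # assumed detection uncertainty in pixels

rng = np.random.default_rng(0)
samples = []
for _ in range(2000):
    jl = (pl + rng.normal(0, sigma, 2)).reshape(2, 1)
    jr = (pr + rng.normal(0, sigma, 2)).reshape(2, 1)
    Xh = cv2.triangulatePoints(P1, P2, jl, jr)
    samples.append((Xh[:3] / Xh[3]).ravel())

samples = np.array(samples)
print("mean 3D point (mm):", samples.mean(axis=0))
print("std dev per axis (mm):", samples.std(axis=0))   # the Z spread dominates
```

Repeating this for different point positions, depths, and baselines shows directly how the uncertainty changes with the triangulation angle mentioned in point 3.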

Update:

But I consistently get larger distance near edges and several times the magnitude of the uncertainty caused by the angle.

That's the result of using the pseudo-inverse, a numerical method. You can replace it with a geometric method. One easy option is to back-project the 2D projections to get 2 rays in 3D space. Then you want to find where they intersect, which doesn't happen exactly due to the inaccuracies; instead, you find the point where the 2 rays are closest to each other. Without considering the uncertainty, that consistently favors one point from the set of feasible solutions. That's why with the pseudo-inverse you don't see random fluctuation but a gross, consistent error.
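A sketch of that midpoint method (Python/NumPy); the camera parameters and pixel coordinates are placeholders:

```python
# Back-project each detection to a ray and take the midpoint of the shortest
# segment between the two rays. Camera parameters here are placeholders.
import numpy as np

def ray(K, R, C, pixel):
    """Ray origin C and unit direction (in world coordinates) through a pixel."""
    d = R.T @ np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    return C, d / np.linalg.norm(d)

def midpoint_triangulate(o1, d1, o2, d2):
    """Midpoint of the common perpendicular between two 3D rays."""
    # Solve for s, t minimizing |(o1 + s*d1) - (o2 + t*d2)|^2.
    b = o2 - o1
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    s, t = np.linalg.solve(A, np.array([d1 @ b, d2 @ b]))
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))

# Placeholder rectified pair: identity rotations, 120 mm baseline along x.
K = np.array([[700.0, 0, 640.0], [0, 700.0, 360.0], [0, 0, 1.0]])
o1, d1 = ray(K, np.eye(3), np.zeros(3), (650.0, 400.0))
o2, d2 = ray(K, np.eye(3), np.array([120.0, 0.0, 0.0]), (606.0, 400.0))
print("midpoint estimate (mm):", midpoint_triangulate(o1, d1, o2, d2))
```

A weighted variant (or a full reprojection-error minimization) can additionally account for different uncertainties in the two cameras instead of treating both rays equally.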

Regarding the general optimization: yes, you can run an iterative LM (Levenberg-Marquardt) optimization over all the parameters. This is the method used in applications like SLAM for autonomous vehicles, where accuracy is very important. You can find some papers by googling bundle adjustment slam.