2 votes

I'm trying to write a function that, given two cameras (their rotation and translation matrices, focal lengths, and the coordinates of a point in each camera's image), triangulates that point in 3D space. Basically, I'm given all the extrinsic/intrinsic values needed.

I'm familiar with the general idea: somehow create two rays and find the point closest to both in a least-squares sense. However, I don't know exactly how to translate the given information into a system of equations for the point's 3D coordinates.

I have done similar stuff, and even though it might make a lot of sense to you, for those of us with rusty theory very little here makes sense. You might want to add more information. – DaveIdito

2 Answers

1 vote

I've arrived a couple of years late, but I ran into the exact same issue and found several people asking the same question without ever finding an answer simplified enough for me to understand. So I spent days learning this material just so I could boil it down to the essentials and post it here for future readers.

I'll also give you some code samples at the end to do what you want in Python, so stick around.

Here are some screenshots of my handwritten notes that explain the full process: Page 1, Page 2, Page 3.

The equation I start with can be found in the OpenCV calib3d documentation: https://docs.opencv.org/master/d9/d0c/group__calib3d.html

Starting formula: s [u, v, 1]^T = M [R|t] [X, Y, Z, 1]^T, where M is the camera matrix, [R|t] the extrinsics, (u, v) the pixel coordinates, and (X, Y, Z) the world coordinates.

Once you choose an origin in the real world that is the same for both cameras, you will have two of these equations with the same X, Y, Z values.

Sorry, this next part you already have, but others might not have gotten this far:

First you need to calibrate your camera, which will give you the camera matrix and distortion coefficients (intrinsic properties) for each camera: https://docs.opencv.org/master/dc/dbb/tutorial_py_calibration.html

You only need those two and can discard the rvecs and tvecs, because they will change when you set up the camera.
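As a rough sketch of that calibration step (objpoints, imgpoints, and image_size are placeholders here; the linked tutorial shows how to build them from chessboard images):

```python
import cv2
import numpy as np

# objpoints: list of (N, 3) float32 arrays of real-world chessboard corner positions
# imgpoints: list of (N, 2) float32 arrays of the detected corners in each image
# image_size: (width, height) of the calibration images
# See the calibration tutorial linked above for how to collect these.
ret, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, image_size, None, None)

# Keep camera_matrix (3x3) and dist_coeffs; the per-image rvecs/tvecs can be discarded.
```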

Once you choose your real-world coordinate system, you can use cv2.solvePnP to get the rotation and translation vectors. To do this you need a set of real-world points and their corresponding image coordinates for each camera. My trick was to write some code that shows an image of the field, lets me click on locations, and records the mapping to the real-world points I pass in. That code is a bit lengthy, so I won't share it here unless requested.
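A minimal sketch of that step, assuming camera_matrix and dist_coeffs come from the calibration above; the point correspondences below are made-up placeholders, not real measurements:

```python
import cv2
import numpy as np

# Hypothetical correspondences: 3D points in your chosen world coordinate system
# and the pixel locations where they appear in this camera's image.
world_points = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                         [0, 0, 1], [1, 0, 1]], dtype=np.float32)
image_points = np.array([[322, 241], [411, 238], [405, 325], [318, 330],
                         [325, 160], [414, 158]], dtype=np.float32)

# rvec is a Rodrigues rotation vector, tvec is a 3x1 translation vector
ok, rvec, tvec = cv2.solvePnP(world_points, image_points, camera_matrix, dist_coeffs)
```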

cv2.solvePnP gives you the rotation as a Rodrigues vector, so you need to convert it to a 3x3 matrix using the following line:

`R, jac = cv2.Rodrigues(rvec)`

So now back to the original question: You have the 3x3 camera matrix for each camera. You have the 3x3 rotation matrix for each camera. You have the 3x1 translation vector for each camera. You have some (u, v) coordinate for where the object of interest is in each camera's image. The math is explained in more detail in the notes linked above.
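Since the notes are only available as images, here is the core step in equations. Writing P = M [R|t] (a 3x4 matrix) and letting (X, Y, Z, 1) be the homogeneous world point, the projection s (u, v, 1)^T = P (X, Y, Z, 1)^T gives s from the third row; substituting it into the first two rows yields the two linear equations per camera that the code below stacks:

$$
v\,\bigl(P_{3}\cdot \tilde{X}\bigr) - P_{2}\cdot \tilde{X} = 0,
\qquad
P_{1}\cdot \tilde{X} - u\,\bigl(P_{3}\cdot \tilde{X}\bigr) = 0,
$$

where $P_i$ is the i-th row of $P$ and $\tilde{X} = (X, Y, Z, 1)^T$. Two cameras give four such equations in the three unknowns (X, Y, Z), i.e. the overdetermined system solved at the end of the function.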

import numpy as np

def get_xyz(camera1_coords, camera1_M, camera1_R, camera1_T, camera2_coords, camera2_M, camera2_R, camera2_T):
    # Get the two key equations from camera1
    camera1_u, camera1_v = camera1_coords
    # Put the rotation and translation side by side and then multiply with camera matrix
    camera1_P = camera1_M.dot(np.column_stack((camera1_R,camera1_T)))
    # Get the two linearly independent equations referenced in the notes
    camera1_vect1 = camera1_v*camera1_P[2,:]-camera1_P[1,:]
    camera1_vect2 = camera1_P[0,:] - camera1_u*camera1_P[2,:]
    
    # Get the two key equations from camera2
    camera2_u, camera2_v = camera2_coords
    # Put the rotation and translation side by side and then multiply with camera matrix
    camera2_P = camera2_M.dot(np.column_stack((camera2_R,camera2_T)))
    # Get the two linearly independent equations referenced in the notes
    camera2_vect1 = camera2_v*camera2_P[2,:]-camera2_P[1,:]
    camera2_vect2 = camera2_P[0,:] - camera2_u*camera2_P[2,:]
    
    # Stack the 4 rows to create one 4x3 matrix
    full_matrix = np.vstack((camera1_vect1, camera1_vect2, camera2_vect1, camera2_vect2))
    # The first three columns make up A and the last column is b
    A = full_matrix[:, :3]
    b = full_matrix[:, 3].reshape((4, 1))
    # Solve overdetermined system. Note b in the wikipedia article is -b here.
    # https://en.wikipedia.org/wiki/Overdetermined_system
    soln = np.linalg.inv(A.T.dot(A)).dot(A.T).dot(-b)
    return soln
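A hypothetical call, just to show the expected shapes (the camera matrices below are made-up numbers, and rvec1/tvec1, rvec2/tvec2, and the (u, v) coordinates are assumed to come from the solvePnP and detection steps described above):

```python
import cv2
import numpy as np

camera1_M = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])  # 3x3 intrinsics
camera2_M = np.array([[810., 0., 318.], [0., 810., 236.], [0., 0., 1.]])
camera1_R, _ = cv2.Rodrigues(rvec1)   # 3x3 rotation from each camera's solvePnP result
camera2_R, _ = cv2.Rodrigues(rvec2)
camera1_T, camera2_T = tvec1, tvec2   # 3x1 translation vectors

point_3d = get_xyz((u1, v1), camera1_M, camera1_R, camera1_T,
                   (u2, v2), camera2_M, camera2_R, camera2_T)
print(point_3d)  # 3x1 column: estimated (X, Y, Z) in world coordinates
```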
0 votes

Assume you have two cameras -- camera 1 and camera 2.

For each camera j = 1, 2 you are given:

  1. The distance hj between its center Oj (is "focal point" the right term? Basically the point Oj from which the camera is looking at its screen) and the camera's screen. The camera's coordinate system is centered at Oj; the Oj--->x and Oj--->y axes are parallel to the screen, while the Oj--->z axis is perpendicular to the screen.

  2. The 3 x 3 rotation matrix Uj and the 3 x 1 translation vector Tj, which transform the Cartesian 3D coordinates with respect to the coordinate system of camera j (see point 1) into world coordinates, i.e. coordinates with respect to a third coordinate system in which all points in the 3D world are described.

  3. On the screen of camera j, which is the plane parallel to the plane Oj-x-y at a distance hj from the origin Oj, you have the 2D coordinates (let's say the x, y coordinates only) of a point pj, where the two points p1 and p2 are in fact the projected images of the same point P, somewhere in 3D, onto the screens of cameras 1 and 2 respectively. The projection is obtained by drawing the 3D line between point Oj and point P and defining pj as the unique intersection of this line with the screen of camera j. The equation of the screen in camera j's 3D coordinate system is z = hj, so the coordinates of pj with respect to the 3D coordinate system of camera j are pj = (xj, yj, hj), and the 2D screen coordinates are simply pj = (xj, yj).

Input: You are given the 2D points p1 = (x1, y1) and p2 = (x2, y2), the two cameras' focal distances h1 and h2, two 3 x 3 rotation matrices U1 and U2, and two 3 x 1 translation column vectors T1 and T2.

Output: The coordinates P = (x0, y0, z0) of point P in the world coordinate system.

One somewhat simple way to do this, avoiding homogeneous coordinates and projection matrices (which is fine too and more or less equivalent), is the following algorithm:

  1. Form Q1 = [x1; y1; h1] and Q2 = [x2; y2; h2] , where they are interpreted as 3 x 1 vector columns;

  2. Transform P1 = U1*Q1 + T1 and P2 = U2*Q2 + T2, where * is matrix multiplication; here a 3 x 3 matrix is multiplied by a 3 x 1 column, giving a 3 x 1 column;

  3. Form the lines X = T1 + t1*(P1 - T1) and X = T2 + t2*(P2 - T2) ;

  4. The two lines from step 3 either intersect at a common point, which is then the point P, or they are skew lines, i.e. they do not intersect but are also not parallel (they are not coplanar).

  5. If the lines are skew, find the unique point X1 on the first line and the unique point X2 on the second line such that the vector X2 - X1 is perpendicular to both lines, i.e. X2 - X1 is perpendicular to both vectors P1 - T1 and P2 - T2. These two points X1 and X2 are the closest points on the two lines, and point P can then be taken as the midpoint (X1 + X2)/2 of the segment X1 X2 (see the code sketch at the end of this answer).

In general, the two lines should pass very close to each other, so the two points X1 and X2 should be very close to each other.
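Here is a minimal numpy sketch of steps 1-5, under the assumption that U1, U2 are the 3 x 3 rotation matrices, T1, T2 the translations (taken here as length-3 arrays), and p1, p2, h1, h2 the screen points and focal distances described above:

```python
import numpy as np

def triangulate_midpoint(p1, h1, U1, T1, p2, h2, U2, T2):
    # Step 1: the screen points in each camera's own 3D coordinate system
    Q1 = np.array([p1[0], p1[1], h1], dtype=float)
    Q2 = np.array([p2[0], p2[1], h2], dtype=float)
    # Step 2: transform them to world coordinates
    P1 = U1 @ Q1 + T1
    P2 = U2 @ Q2 + T2
    # Step 3: the rays start at the camera centers T1, T2 with directions d1, d2
    d1 = P1 - T1
    d2 = P2 - T2
    # Steps 4-5: choose t1, t2 so that X2 - X1 is perpendicular to both directions;
    # this is a 2x2 linear system in t1 and t2.
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    r = T2 - T1
    denom = a * c - b * b          # zero only if the two rays are parallel
    t1 = (c * (d1 @ r) - b * (d2 @ r)) / denom
    t2 = (b * (d1 @ r) - a * (d2 @ r)) / denom
    X1 = T1 + t1 * d1
    X2 = T2 + t2 * d2
    # Midpoint of the shortest segment between the two rays
    return (X1 + X2) / 2
```

If the two lines happen to intersect exactly (case 4 above), X1 and X2 coincide at the intersection point, so the same formula still returns P.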