
This is my first time doing image processing, so I have a lot of questions. I have two pictures taken from different positions, one from the left and the other from the right, as shown below.[![enter image description here][1]][1]

Step 1: Read images by using imread function

  I1 = imread('DSC01063.jpg');

  I2 = imread('DSC01064.jpg');

Step 2: Use the Camera Calibrator app in MATLAB to get the cameraParameters

  load cameraParams.mat 

Step 3: Remove Lens Distortion by using undistortImage function

  [I1, newOrigin1] = undistortImage(I1, cameraParams, 'OutputView', 'same');

  [I2, newOrigin2] = undistortImage(I2, cameraParams, 'OutputView', 'same');

Step 4: Detect feature points by using detectSURFFeatures function

  imagePoints1 = detectSURFFeatures(rgb2gray(I1), 'MetricThreshold', 600);

  imagePoints2 = detectSURFFeatures(rgb2gray(I2), 'MetricThreshold', 600);

Step 5: Extract feature descriptors by using extractFeatures function

  features1 = extractFeatures(rgb2gray(I1), imagePoints1);

  features2 = extractFeatures(rgb2gray(I2), imagePoints2);

Step 6: Match Features by using matchFeatures function

  indexPairs = matchFeatures(features1, features2, 'MaxRatio', 1);

  matchedPoints1 = imagePoints1(indexPairs(:, 1));

  matchedPoints2 = imagePoints2(indexPairs(:, 2));

From there, how can I construct the 3D point cloud? In Step 2, I used the checkerboard shown in the attached picture to calibrate the camera.[![enter image description here][2]][2]

The square size is 23 mm, and from cameraParams.mat I know the intrinsic matrix (or camera calibration matrix) K, which has the form K = [alphax 0 x0; 0 alphay y0; 0 0 1].

I need to compute the fundamental matrix F and the essential matrix E in order to calculate the camera matrices P1 and P2, right?

After that, when I have the camera matrices P1 and P2, I will use the linear triangulation method to estimate the 3D point cloud. Is that the correct way?
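To make the plan concrete, here is a rough sketch of how I imagine the next steps; the RANSAC settings are just example values, and the essential-matrix line assumes MATLAB's row-vector convention, in which cameraParams.IntrinsicMatrix is the transpose of the textbook K:

    % Sketch of the F/E computation, assuming the variables from Steps 1-6
    % (matchedPoints1, matchedPoints2, cameraParams) are in the workspace.

    % Fundamental matrix with RANSAC to reject bad matches
    [F, inliers] = estimateFundamentalMatrix(matchedPoints1, matchedPoints2, ...
        'Method', 'RANSAC', 'NumTrials', 2000);
    inlierPoints1 = matchedPoints1(inliers);
    inlierPoints2 = matchedPoints2(inliers);

    % Essential matrix. Note: cameraParams.IntrinsicMatrix is the transpose
    % of the textbook K, so the multiplication order differs from K'*F*K.
    K = cameraParams.IntrinsicMatrix;
    E = K * F * K';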

I would appreciate any suggestions.

Thanks!

Sorry! I cannot post the pictures. - TRI TRAN
If you set the OutputView parameter of undistortImage to same, then you do not have to care about the newOrigin, because it is [0 0]. - Dima
@TRITRAN, did you get your code to work with 2 images? If so, can you show me the full code please? I need it for my project; it's the last part needed to complete it. Thanks - Zame

2 Answers


To triangulate the points you need the so-called "camera matrices" and the 2D points in each of the images (which you already have).

In MATLAB you have the function triangulate, which does the job for you.

If you have calibrated the cameras, you should have this information already. Anyway, here is an example of how to create the stereoParams object needed for the triangulation.
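For illustration, a minimal sketch of that route; the file name stereoParams.mat is an assumption (whatever you exported from the Stereo Camera Calibrator app), and matchedPoints1/matchedPoints2 are the matched points from the question:

    % Minimal sketch: assumes stereoParams.mat was exported from the
    % Stereo Camera Calibrator app, and matchedPoints1/matchedPoints2
    % come from the question's matching step.
    load stereoParams.mat

    % Triangulate the matched points directly with the stereo calibration
    worldPoints = triangulate(matchedPoints1, matchedPoints2, stereoParams);
    % worldPoints is M-by-3, in the units of the calibration pattern
    % (millimetres here, given the 23 mm squares).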


Yes, that is the correct way. Now that you have matched points, you can use estimateFundamentalMatrix to compute the fundamental matrix F. Then you get the essential matrix E by multiplying F by the intrinsics. Be careful about the order of multiplication, because the intrinsic matrix in cameraParameters is transposed relative to what you see in most textbooks.

Now, you have to decompose E into a rotation and a translation, from which you can construct the camera matrix for the second camera using cameraMatrix. You also need the camera matrix for the first camera, for which the rotation is a 3x3 identity matrix and the translation is a 3-element zero vector.

Edit: there is now a cameraPose function in MATLAB, which computes an up-to-scale relative pose ('R' and 't') given the fundamental matrix and the camera parameters.
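Put together, the route described above might look like the sketch below; the variable names F, inlierPoints1, and inlierPoints2 are assumptions (e.g. the outputs of estimateFundamentalMatrix on the matched points), and the recovered pose is only defined up to scale:

    % Sketch only: assumes F, inlierPoints1, inlierPoints2 and cameraParams
    % already exist (e.g. from estimateFundamentalMatrix on the matched points).

    % Up-to-scale pose of camera 2 relative to camera 1
    [R, t] = cameraPose(F, cameraParams, inlierPoints1, inlierPoints2);

    % Camera 1 at the world origin: identity rotation, zero translation
    P1 = cameraMatrix(cameraParams, eye(3), [0 0 0]);

    % Camera 2: convert the pose (orientation, location) to extrinsics
    P2 = cameraMatrix(cameraParams, R', -t * R');

    % Linear triangulation of the matched inlier points
    points3D = triangulate(inlierPoints1, inlierPoints2, P1, P2);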