
I'm trying to implement a standard Structure from Motion pipeline, but triangulatePoints() is returning NaN values. I do the following steps:

  1. Get ORB features for image 1 and image 2.
  2. Match the features.
  3. Get the essential matrix from the matched features and the available camera intrinsics, using findEssentialMat().
  4. Use recoverPose() to get the pose of camera 2; camera 1 is assumed to be at the origin.
  5. Get the projection matrices for camera 1 and camera 2.
  6. Use triangulatePoints() to get the 3D points.

I have also tried MATLAB's triangulate() function with the same projection matrices for camera 1 and camera 2 and the matched points obtained above. In MATLAB I get good results, but with OpenCV in C++ (OpenCV 3.3) I get NaN values.
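For reference, a minimal sketch of the steps above in OpenCV 3.x (K, img1, img2 and the function name are placeholders for the camera intrinsics, the two input images and a wrapper; the RANSAC parameters are arbitrary):

    #include <opencv2/opencv.hpp>
    #include <vector>
    using namespace cv;

    // Minimal sketch of the pipeline; K is the 3x3 camera matrix (CV_64F).
    Mat sfmSketch(const Mat& img1, const Mat& img2, const Mat& K)
    {
        // 1. + 2. ORB features and matching
        Ptr<ORB> orb = ORB::create();
        std::vector<KeyPoint> kp1, kp2;
        Mat desc1, desc2;
        orb->detectAndCompute(img1, noArray(), kp1, desc1);
        orb->detectAndCompute(img2, noArray(), kp2, desc2);

        BFMatcher matcher(NORM_HAMMING, true);      // cross-check matching
        std::vector<DMatch> matches;
        matcher.match(desc1, desc2, matches);

        // matched pixel coordinates as float (what triangulatePoints expects)
        std::vector<Point2f> pts1, pts2;
        for (const DMatch& m : matches) {
            pts1.push_back(kp1[m.queryIdx].pt);
            pts2.push_back(kp2[m.trainIdx].pt);
        }

        // 3. essential matrix from the matches and the intrinsics
        Mat mask;
        Mat E = findEssentialMat(pts1, pts2, K, RANSAC, 0.999, 1.0, mask);

        // 4. pose of camera 2 relative to camera 1 (camera 1 at the origin)
        Mat R, t;
        recoverPose(E, pts1, pts2, K, R, t, mask);

        // 5. projection matrices P1 = K[I|0], P2 = K[R|t]
        Mat P1 = K * Mat::eye(3, 4, CV_64F);
        Mat Rt;
        hconcat(R, t, Rt);
        Mat P2 = K * Rt;

        // 6. triangulate -> 4xN homogeneous points (still need dividing by w)
        Mat points4D;
        triangulatePoints(P1, P2, pts1, pts2, points4D);
        return points4D;
    }

Note that the sketch does not apply the RANSAC inlier mask; in practice only the matches with mask != 0 should be triangulated, since outliers can produce points far away or at infinity.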

Have you verified the results of steps 1, 2, 3, 4 and 5? I mean, are you sure you are getting the correct data at each OpenCV step? Can you post the code used for triangulatePoints()? - api55
Yes, for steps 1 and 2 I use drawMatches() to check the matched points and they look consistent. For steps 3, 4 and 5 I feed the calculated values into MATLAB's triangulate function and that gives consistent results as well. I call triangulatePoints() as follows: vector<Point2f> src, dst; // matched points on image 1 and image 2 Mat point3D; triangulatePoints(cam1, cam2, src, dst, point3D); // cam1 and cam2 are 3x4 matrices - Ambika Verma

1 Answer


I also faced problems with triangulatePoints from OpenCV. In my case it was due to the wrong type of the input point arrays:

  • The points need to have the shape 2xN, or be a two-channel matrix of size 1xN or Nx1.
  • They must be of type float (see the sketch right after this list).
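A minimal sketch of an input that satisfies both points; the projection matrices and coordinates are made-up placeholders, and a vector<Point2f> already has the accepted layout (a 1xN two-channel float array):

    #include <opencv2/opencv.hpp>
    #include <vector>
    using namespace cv;

    int main()
    {
        // Made-up intrinsics and a second camera shifted along x, just to
        // have two valid 3x4 projection matrices for the example.
        Mat K = (Mat_<double>(3, 3) << 700, 0, 320, 0, 700, 240, 0, 0, 1);
        Mat P1 = K * Mat::eye(3, 4, CV_64F);
        Mat Rt = Mat::eye(3, 4, CV_64F);
        Rt.at<double>(0, 3) = -1.0;
        Mat P2 = K * Rt;

        // vector<Point2f> is accepted directly and is already float.
        std::vector<Point2f> pts1 = { {100.f, 120.f}, {200.f, 80.f} };
        std::vector<Point2f> pts2 = { {110.f, 120.f}, {215.f, 80.f} };

        Mat points4D;                      // output: 4xN homogeneous, CV_32F
        triangulatePoints(P1, P2, pts1, pts2, points4D);

        // If the points are stored in a double-precision Mat (CV_64FC2),
        // convert them to float before the call:
        Mat ptsDouble;
        Mat(pts1).convertTo(ptsDouble, CV_64FC2);   // pretend this is the input
        Mat ptsFloat;
        ptsDouble.convertTo(ptsFloat, CV_32FC2);    // now usable as Nx1 CV_32FC2
        return 0;
    }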

Apart from this, the only other thing I can think of is that some of the triangulated points may lie at (or near) infinity, so that converting them from homogeneous to Euclidean coordinates divides by (nearly) zero -> NaN.
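A way to guard against that when converting the 4xN homogeneous output to Euclidean coordinates (the function name and the 1e-9 threshold are placeholders):

    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <vector>
    using namespace cv;

    // Convert the 4xN homogeneous output of triangulatePoints to Euclidean
    // 3D points, skipping points whose scale w is (almost) zero, i.e. points
    // at infinity that would otherwise become NaN/Inf after the division.
    std::vector<Point3d> toEuclidean(const Mat& points4D)
    {
        Mat p;
        points4D.convertTo(p, CV_64F);          // work in double precision
        std::vector<Point3d> result;
        for (int i = 0; i < p.cols; ++i) {
            double w = p.at<double>(3, i);
            if (std::abs(w) < 1e-9)             // point at infinity -> skip
                continue;
            result.emplace_back(p.at<double>(0, i) / w,
                                p.at<double>(1, i) / w,
                                p.at<double>(2, i) / w);
        }
        return result;
    }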