I have a calibrated camera (intrinsic matrix and distortion coefficients) and I want to find the camera position given some 3D points and their corresponding points in the image (2D points).
I know that cv::solvePnP could help me, and after reading this and this I understand that the outputs of solvePnP, rvec and tvec, are the rotation and translation of the object in the camera coordinate system.
So I need to find out the camera rotation/translation in the world coordinate system.
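(If I understand the convention correctly, solvePnP gives the world-to-camera transform X_cam = R * X_world + t, so the camera centre should be the world point that maps to X_cam = 0, which gives C = -R^T * t. Please correct me if that reasoning is wrong.)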
From the links above it seems that the code is straightforward, in Python:

    import cv2
    import numpy as np

    found, rvec, tvec = cv2.solvePnP(object_3d_points, object_2d_points, camera_matrix, dist_coefs)
    rotM = cv2.Rodrigues(rvec)[0]
    cameraPosition = -np.matrix(rotM).T * np.matrix(tvec)  # supposedly C = -R^T * t
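In C++ I would write it like this (my attempt, untested; the helper name and the assumption that objectPoints, imagePoints, cameraMatrix and distCoeffs are already filled in are mine):

    #include <opencv2/opencv.hpp>
    #include <vector>

    // My attempt at the same computation in C++ (untested sketch).
    cv::Mat cameraPositionInWorld(const std::vector<cv::Point3f>& objectPoints,
                                  const std::vector<cv::Point2f>& imagePoints,
                                  const cv::Mat& cameraMatrix,
                                  const cv::Mat& distCoeffs)
    {
        cv::Mat rvec, tvec;
        cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);

        cv::Mat rotM;
        cv::Rodrigues(rvec, rotM);  // rotation vector -> rotation matrix

        return -rotM.t() * tvec;    // C = -R^T * t, if I got the formula right
    }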
I don't know Python/numpy well (I'm using C++), but this does not make much sense to me:
- rvec and tvec output from solvePnP are 3x1 matrices (3-element vectors)
- cv2.Rodrigues(rvec) is a 3x3 matrix
- cv2.Rodrigues(rvec)[0] is a 3x1 matrix (a 3-element vector)
- cameraPosition is then a 3x1 * 1x3 matrix multiplication, which gives a... 3x3 matrix. How can I use this in OpenGL with simple glTranslatef and glRotate calls?
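In case it clarifies what I am after, this is roughly what I imagine on the OpenGL side: building the full 4x4 modelview matrix and loading it, instead of separate glTranslatef/glRotate calls. This is an untested sketch of mine, and the column-major order and the OpenCV-vs-OpenGL axis conventions are exactly the parts I am unsure about:

    #include <GL/gl.h>
    #include <opencv2/opencv.hpp>

    // Untested sketch: load the solvePnP pose as the OpenGL modelview matrix.
    void loadModelviewFromPose(const cv::Mat& rvec, const cv::Mat& tvec)
    {
        cv::Mat rotM;
        cv::Rodrigues(rvec, rotM);  // 3x1 rotation vector -> 3x3 matrix (CV_64F)

        GLdouble mv[16];            // OpenGL expects column-major order
        for (int col = 0; col < 3; ++col) {
            for (int row = 0; row < 3; ++row)
                mv[col * 4 + row] = rotM.at<double>(row, col);
            mv[col * 4 + 3] = 0.0;  // bottom row of the first three columns
        }
        mv[12] = tvec.at<double>(0);  // translation goes in the last column
        mv[13] = tvec.at<double>(1);
        mv[14] = tvec.at<double>(2);
        mv[15] = 1.0;

        glMatrixMode(GL_MODELVIEW);
        glLoadMatrixd(mv);
        // I suspect a y/z flip is still needed here, since the OpenCV camera
        // looks down +z while OpenGL looks down -z, but I am not sure.
    }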