I am working on a head pose estimation project using 2 cameras. For a single camera the system works and returns the rotation matrix and translation vector of the head with respect to that camera's coordinate system. I have a rendered object in an OpenGL scene which is rotated and translated to represent the head movements. To display the computed rotation matrix and translation vector I simply use the following OpenGL commands:
glMatrixMode(GL_MODELVIEW);
glLoadMatrixd(pose_matrix);
where pose_matrix is the OpenGL ModelView matrix constructed from the rotation matrix and translation vector of the head.
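For concreteness, this is roughly how such a pose_matrix can be assembled (a minimal sketch, assuming R is the 3x3 rotation matrix and t the translation vector from the pose estimator; buildPoseMatrix is just an illustrative helper, and the key point is that glLoadMatrixd expects column-major storage):

// Pack a 3x3 rotation R and a translation t into a 4x4 column-major
// matrix suitable for glLoadMatrixd.
void buildPoseMatrix(const double R[3][3], const double t[3],
                     double pose_matrix[16])
{
    for (int col = 0; col < 3; ++col)
        for (int row = 0; row < 3; ++row)
            pose_matrix[col * 4 + row] = R[row][col]; // row-major R into column-major layout
    pose_matrix[12] = t[0];  // translation occupies the last column
    pose_matrix[13] = t[1];
    pose_matrix[14] = t[2];
    pose_matrix[3] = pose_matrix[7] = pose_matrix[11] = 0.0;
    pose_matrix[15] = 1.0;
}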
Now I am trying to do this for 2 calibrated cameras. When the first camera loses track of the face but the second one still estimates the head pose, I display the rotation and translation with respect to the second camera, and vice versa. I want to display a single OpenGL object and move it in both cases. For that I need to transfer the pose matrices into a common coordinate frame.
I know the relative geometry of the 2 cameras with respect to each other. I take one of the cameras as the world coordinate frame and transfer the head pose matrix estimated by the second camera into the frame of the first camera by multiplying the pose matrix with the calibration matrix of the second camera with respect to the first one. When I load this multiplied matrix into the OpenGL ModelView matrix I get wrong results: when the first camera captures the face the object moves correctly, but for the second camera the object is translated and rotated and does not end up in the same place as for the first camera.
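In homogeneous 4x4 terms, what I am attempting is: if T_21 is the transform mapping camera-2 coordinates into camera-1 coordinates and pose2 is the head pose in camera 2's frame, then the pose in camera 1's frame should be pose_in_1 = T_21 * pose2. Here is a sketch of that step (all matrices column-major as glLoadMatrixd expects; multiply4x4 is a hypothetical helper, and note that the multiplication order matters):

// C = A * B for 4x4 column-major matrices (element (row,col) at index col*4+row)
void multiply4x4(const double A[16], const double B[16], double C[16])
{
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row) {
            double s = 0.0;
            for (int k = 0; k < 4; ++k)
                s += A[k * 4 + row] * B[col * 4 + k];
            C[col * 4 + row] = s;
        }
}

// Usage:
//   double pose_in_1[16];
//   multiply4x4(T_21, pose2, pose_in_1);   // T_21 * pose2, not pose2 * T_21
//   glMatrixMode(GL_MODELVIEW);
//   glLoadMatrixd(pose_in_1);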
What could be the problem? Could the OpenGL display part be wrong, or is something off in the way I transfer the pose between the camera frames?