In OpenCV, I am using a ChArUco board, have calibrated the camera, and use solvePnP to get rvec and tvec (similar to the sample code). The board is stationary, and the camera is mounted on a circular rig that orbits the board. I'm a noob at this, so please bear with me if I'm missing something simple.
I understand that I can use Rodrigues() to get the 3x3 rotation matrix R for the board orientation from rvec, and that I can transform tvec into world coordinates using -R.t() * tvec (in C++).
However, as I understand it, this 3x3 rotation R gives the orientation of the board with respect to the camera, so it's not quite what I need. I want the rotation of the camera itself, which (I think) is offset from R by the angle between tvec and the z axis in camera space, because the camera is not always pointing at the board origin, but it is always pointing down the z axis in camera space. Is this correct?
How do I find this additional rotation offset and convert it to a 3x3 rotation matrix that I can combine with R to get the actual camera orientation?
Thanks!