
The Problem

I'm trying to use Blender to create synthetic images for use with OpenCV's pose estimation (specifically, OpenCV's findEssentialMat and recoverPose). However, I found that the rotation matrix R that OpenCV returns corrects for rotations around the camera's Y and Z axes, but not its X axis. I suspect this is because Blender and OpenCV use different camera models (see the diagram below), but I can't figure out how to correct for this. How would I take a rotation matrix produced under OpenCV's camera model and apply it to Blender's camera model?
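For reference, here's a minimal sketch of the pose-recovery step I'm describing (pts1/pts2 are matched pixel coordinates between the two renders and K is the camera intrinsic matrix; these names are illustrative placeholders):

import cv2

# pts1, pts2: Nx2 float arrays of matched points in the two images (assumed)
# K: 3x3 camera intrinsic matrix (assumed known from the Blender render)
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
retval, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
# R is the estimated rotation, expressed in OpenCV's camera convention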

[Diagram: Blender vs. OpenCV camera models]

Additional Details

To test this, I rendered a scene from (0, 0, 30) with an identity camera rotation, then rendered it again with the camera rotated by 10 degrees around X, Y, and Z in turn (a minimal bpy sketch of this setup follows the images below). First, here's the identity rotation (no rotation):

[Image: the original scene (identity rotation)]

Here's a 10-degree rotation around X:

[Image: 10-degree rotation around X]

And here's a 10-degree rotation around Y:

[Image: 10-degree rotation around Y]

Finally, here's a 10-degree rotation around Z:

[Image: 10-degree rotation around Z]
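Here's the bpy sketch of how I set up the renders (assuming the default camera object named "Camera"; paths are illustrative):

import bpy
from math import radians

cam = bpy.data.objects["Camera"]
cam.location = (0.0, 0.0, 30.0)
cam.rotation_euler = (0.0, 0.0, 0.0)  # identity rotation
bpy.context.scene.render.filepath = "//identity.png"
bpy.ops.render.render(write_still=True)

cam.rotation_euler = (radians(10), 0.0, 0.0)  # 10 degrees around X
bpy.context.scene.render.filepath = "//rot_x.png"
bpy.ops.render.render(write_still=True)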

Applying the rotation matrix OpenCV returns (for the estimated transformation between each rotated image and the original) corrects for all of these rotations except the one around X, which looks like this:

[Image: incorrect correction around X]

It seems that instead of correctly rotating by -10 degrees around X, the rotation matrix rotates a further 10 degrees around X.
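One way to make this visible is to decompose the recovered matrix into Euler angles (a quick diagnostic sketch, assuming SciPy is available and R is the matrix returned by recoverPose):

from scipy.spatial.transform import Rotation

# Decompose the recovered rotation into XYZ Euler angles (degrees)
# to see which axis component has the wrong sign
print(Rotation.from_matrix(R).as_euler("xyz", degrees=True))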


1 Answer


Going from an OpenCV rotation matrix to Blender can be accomplished by multiplying by another rotation matrix that compensates for the change in coordinate systems:

# 180-degree rotation around X: flips the Y and Z axes
mat = [[1,  0,  0],
       [0, -1,  0],
       [0,  0, -1]]

You said the Y and Z components are already being compensated for, so perhaps the negation of this matrix is what you need?
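For illustration, one common way to apply such a compensation is as a change of basis, conjugating the OpenCV rotation by the flip matrix above. This is only a sketch of that idea (with R assumed to be the matrix from recoverPose), not a verified fix for your setup:

import numpy as np

# Flip Y and Z: converts between OpenCV's (x right, y down, z forward)
# and Blender's (x right, y up, z backward) camera axes
M = np.diag([1.0, -1.0, -1.0])

# Change of basis: express the OpenCV rotation R in Blender's camera axes
# (M is its own inverse, so conjugation is M @ R @ M)
R_blender = M @ R @ M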