I have a projection matrix derived from the camera calibration in an augmented reality app, and as long as the screen aspect ratio matches the camera image aspect ratio everything is fine. When the camera image doesn't map to the screen edge-for-edge, the tracking appears distorted.
The problem scenarios:
- 1280x720 video on an iPad
- 640x480 video on an iPhone 5S
The working scenarios:
- 640x480 video on an iPad
- 1280x720 video on an iPhone 5S
Goal: I want to handle this screen/camera aspect ratio mismatch in a general way.
This problem exists because the view has normalized device coordinates in the aspect ratio of the screen (4:3 for iPad), whereas the projection matrix has the aspect ratio of the camera image (16:9 for 720p). The background image needs to match up with the projection matrix or the illusion of augmented reality fails, so if I want to toggle between 'fit' and 'fill' I'll need to change the projection matrix to match how the image is displayed.
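To make the fit/fill toggle concrete, here is a sketch of the kind of manipulation I believe is required. The function name, parameter names, and column-major matrix layout are my own assumptions for illustration; the idea is simply to premultiply the calibrated projection matrix by an NDC-space scale diag(sx, sy, 1, 1) that matches how the background image is fitted or cropped on screen:

```c
#include <stdbool.h>

/* Illustrative sketch (names and layout are assumptions, not from any
 * particular API): premultiply the camera projection matrix by an
 * NDC-space scale so rendered content stays registered with the
 * background image under a 'fit' or 'fill' policy.
 * Assumes a column-major 4x4 matrix (m[col*4 + row]); adapt the
 * indexing for row-major layouts. */
void adjust_projection_for_aspect(float m[16],
                                  float camera_aspect,  /* e.g. 16.0f/9.0f */
                                  float screen_aspect,  /* e.g. 4.0f/3.0f  */
                                  bool fill)
{
    float r = camera_aspect / screen_aspect;

    /* NDC half-extents of the background image quad, with sx/sy = r so
     * the image keeps its own aspect on screen: 'fill' crops (one
     * extent > 1), 'fit' letterboxes (one extent < 1). */
    float sx = fill ? (r > 1.0f ? r : 1.0f)
                    : (r < 1.0f ? r : 1.0f);
    float sy = sx / r;

    /* Premultiplying by diag(sx, sy, 1, 1) scales clip-space x and y;
     * in column-major storage the x row is m[0], m[4], m[8], m[12] and
     * the y row is m[1], m[5], m[9], m[13]. */
    for (int col = 0; col < 4; ++col) {
        m[col * 4 + 0] *= sx;
        m[col * 4 + 1] *= sy;
    }
}
```

For the 720p-on-iPad case, r = (16/9) / (4/3) = 4/3, so 'fill' scales clip-space x by 4/3 (the camera image is cropped left and right) and 'fit' scales clip-space y by 3/4 (letterboxing). Whatever scale is applied to the background image quad would also have to be applied to the projection matrix, or the registration breaks.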
Note: I'm looking to deal with this problem without an OpenGL-specific solution; I want a more general mathematical answer that involves manipulating the projection matrix.