I'm writing an iPhone app that uses AVFoundation to take a photo and crop it. The app is similar to a QR code reader: it uses an AVCaptureVideoPreviewLayer with an overlay containing a square. I want to crop the image so that the result is exactly what the user has placed inside the square.
The preview layer has gravity AVLayerVideoGravityResizeAspectFill.
It looks like what the camera actually captures is not exactly what the user sees in the preview layer. This means I need to convert from the preview coordinate system to the captured image's coordinate system before I can crop. For this I think I need the following:

1. the ratio between the view size and the captured image size;
2. information about which part of the captured image corresponds to what is displayed in the preview layer.
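To make the question concrete, here is the math I believe applies under AVLayerVideoGravityResizeAspectFill: the image is scaled by the larger of the two axis ratios to fill the view, the overflow on the long axis is split evenly, and the overlay rect is then mapped back into image pixels. This is a minimal sketch with my own hypothetical helper (`cropRectInImage` and the `Size`/`Rect` structs are mine, not AVFoundation API); please correct me if the geometry is wrong:

```swift
// Plain value types so the example stands alone (in the app I would use
// CGSize/CGRect instead).
struct Size { var width: Double; var height: Double }
struct Rect { var x: Double; var y: Double; var width: Double; var height: Double }

// Maps a rect given in preview-layer (view) coordinates to the corresponding
// rect in captured-image pixel coordinates, assuming AspectFill gravity:
// the image is scaled to fill the view and the overflow is cropped equally
// on both sides of the long axis.
func cropRectInImage(overlay: Rect, viewSize: Size, imageSize: Size) -> Rect {
    // AspectFill picks the larger scale factor so the image covers the view.
    let scale = max(viewSize.width / imageSize.width,
                    viewSize.height / imageSize.height)
    // Size of the image after scaling into view coordinates.
    let scaledWidth = imageSize.width * scale
    let scaledHeight = imageSize.height * scale
    // The scaled image overflows the view; the hidden margin is split evenly.
    let offsetX = (scaledWidth - viewSize.width) / 2
    let offsetY = (scaledHeight - viewSize.height) / 2
    // Undo the scale and offset to get back to image pixel coordinates.
    return Rect(x: (overlay.x + offsetX) / scale,
                y: (overlay.y + offsetY) / scale,
                width: overlay.width / scale,
                height: overlay.height / scale)
}

// Example: a 320x480 view showing a 1080x1920 portrait capture, with a
// 100x100 overlay square roughly centered in the view.
let r = cropRectInImage(overlay: Rect(x: 110, y: 190, width: 100, height: 100),
                        viewSize: Size(width: 320, height: 480),
                        imageSize: Size(width: 1080, height: 1920))
print(r)
```

Is this the right model of what AspectFill does, or does the capture output differ from the preview in some other way as well?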
Does anybody know how I can obtain this information, or whether there is a different approach to cropping the image?
(P.S. Capturing a screenshot of the preview is not an option, as I understand it might result in the app being rejected.)
Thank you in advance