I'm using VNFaceObservation to get the bounding box and landmark information for a face, but I haven't been able to find where to get the pitch and yaw of the face from the observation.
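
For context, this is roughly how I'm running the Vision request on each frame (simplified sketch; detectFace(in:) stands in for my actual per-frame handler, and the real code also deals with image orientation and errors):

import AVFoundation
import Vision

func detectFace(in pixelBuffer: CVPixelBuffer) {
    let request = VNDetectFaceLandmarksRequest { request, _ in
        guard let faces = request.results as? [VNFaceObservation] else { return }
        for face in faces {
            // The bounding box and landmarks are available here...
            print(face.boundingBox)
            // ...but I can't find pitch or yaw anywhere on the observation.
        }
    }
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}
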
I've also tried getting pitch and yaw from the face metadata via a CIDetector, but running both the CIDetector and the Vision framework at the same time is too CPU intensive.
let metadataOutput = AVCaptureMetadataOutput()
let metaQueue = DispatchQueue(label: "MetaDataSession")
metadataOutput.setMetadataObjectsDelegate(self, queue: metaQueue)
if captureSession.canAddOutput(metadataOutput) {
    captureSession.addOutput(metadataOutput)
} else {
    print("Metadata output cannot be added.")
}
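
// The corresponding delegate callback where I read the angles from
// AVMetadataFaceObject (sketch; assumes metadataObjectTypes is set to [.face]
// after the output is added). This path only reports yaw and roll, not pitch:
func metadataOutput(_ output: AVCaptureMetadataOutput,
                    didOutput metadataObjects: [AVMetadataObject],
                    from connection: AVCaptureConnection) {
    for object in metadataObjects {
        guard let face = object as? AVMetadataFaceObject else { continue }
        if face.hasYawAngle {
            print("yaw: \(face.yawAngle)")
        }
        if face.hasRollAngle {
            print("roll: \(face.rollAngle)")
        }
    }
}
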
let configurationOptions: [String: Any] = [
    CIDetectorAccuracy: CIDetectorAccuracyHigh,
    CIDetectorTracking: true,
    CIDetectorNumberOfAngles: 11
]
faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: configurationOptions)
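
And this is a simplified sketch of how I read the angle back out of the detector (logFaceAngles(in:) is just a stand-in helper; faceDetector is the CIDetector? property created above):

import CoreImage

func logFaceAngles(in image: CIImage) {
    guard let detector = faceDetector else { return }
    let features = detector.features(in: image, options: [CIDetectorImageOrientation: 1])
    for case let face as CIFaceFeature in features {
        if face.hasFaceAngle {
            // faceAngle is the face's rotation in degrees; I still don't get
            // a pitch value from this path either.
            print("face angle: \(face.faceAngle)")
        }
    }
}
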
Is there a way to use the VNFaceObservation data to find the pitch and yaw of the face?