I am using an iPhone X and ARKit's face tracking to capture the user's face. The goal is to texture the face mesh with the user's image. I'm only looking at a single frame (an ARFrame) from the AR session.
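For reference, here is roughly how I run the session and grab that frame (a minimal sketch rather than my exact code; `sceneView` stands in for the ARSCNView in my view controller):

    import ARKit

    // Sketch: start face tracking (needs the iPhone X front TrueDepth camera).
    func startFaceTracking(on sceneView: ARSCNView) {
        guard ARFaceTrackingConfiguration.isSupported else { return }
        let configuration = ARFaceTrackingConfiguration()
        sceneView.session.run(configuration,
                              options: [.resetTracking, .removeExistingAnchors])
    }

    // Later, when I want to texture the mesh, I take whatever frame is current:
    func grabFrame(from sceneView: ARSCNView) -> ARFrame? {
        return sceneView.session.currentFrame
    }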
From ARFaceGeometry, I have a set of vertices that describe the face.
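For completeness, this is roughly where those vertices come from (a sketch; `faceAnchor` stands in for the ARFaceAnchor the session delivers to its delegate):

    import ARKit

    // Sketch: read the model-space vertices off the face geometry.
    func meshVertices(for faceAnchor: ARFaceAnchor) -> [vector_float3] {
        let geometry: ARFaceGeometry = faceAnchor.geometry
        return geometry.vertices   // 1220 model-space positions on current ARKit
    }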
I make a JPEG representation of the current frame's capturedImage.
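The conversion looks roughly like this (a sketch using Core Image; `jpegData(from:)` is my hypothetical helper, and I'm ignoring orientation here since capturedImage arrives in sensor orientation):

    import CoreImage
    import CoreVideo

    // Sketch: encode the frame's CVPixelBuffer as JPEG data.
    func jpegData(from pixelBuffer: CVPixelBuffer) -> Data? {
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        let context = CIContext()
        return context.jpegRepresentation(of: ciImage,
                                          colorSpace: CGColorSpaceCreateDeviceRGB())
    }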
I then want to find the texture coordinates that map the created JPEG onto the mesh vertices. I want to:

1. map the vertices from model space to world space;
2. project the vertices from world space into image space using the camera;
3. divide by the image dimensions to get normalized texture coordinates (see the equations below).
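Spelled out per model-space vertex $v$, with $T$ the face anchor's transform, $\Pi$ the camera's projection to pixel coordinates, and $W \times H$ the captured image resolution, I expect:

\[
v_{\text{world}} = T \begin{bmatrix} v \\ 1 \end{bmatrix}, \qquad
(p_x,\, p_y) = \Pi(v_{\text{world}}), \qquad
(u,\, v) = \left( \frac{p_x}{W},\ \frac{p_y}{H} \right)
\]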
Here's my code:

    let geometry: ARFaceGeometry = contentUpdater.faceGeometry!
    let theCamera = session.currentFrame!.camera
    let theFaceAnchor: SCNNode = contentUpdater.faceNode
    let anchorTransform = theFaceAnchor.simdTransform

    for index in 0..<totalVertices {
        let vertex = geometry.vertices[index]

        // Step 1: model space to world space, using the face node's transform
        let vertex4 = float4(vertex.x, vertex.y, vertex.z, 1.0)
        let worldSpace = anchorTransform * vertex4

        // Step 2: world space to image space via the camera's projection
        let world3 = float3(worldSpace.x, worldSpace.y, worldSpace.z)
        let projectedPt = theCamera.projectPoint(world3,
                                                 orientation: .landscapeRight,
                                                 viewportSize: theCamera.imageResolution)

        // Step 3: divide by image width/height to get normalized texture coordinates
        let vtx = projectedPt.x / theCamera.imageResolution.width
        let vty = projectedPt.y / theCamera.imageResolution.height
        textureVs += "vt \(vtx) \(vty)\n"
    }

(textureVs accumulates the texture coordinates as OBJ `vt` lines, which I write out with the mesh afterwards.)
This is not working; instead it gets me a very funky-looking face! Where am I going wrong?