I am doing two simple things:
- Vertical plane detection
- Image recognition on a vertical plane

The image is hung on the detected plane (on my wall). In both cases I implement the renderer:didAddNode:forAnchor: method from ARSCNViewDelegate, and I stand in the same place for both the vertical plane detection and the image recognition.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    // Load the ship model from the app bundle.
    guard let shipScene = SCNScene(named: "ship.scn"),
          let shipNode = shipScene.rootNode.childNode(withName: "ship", recursively: false) else { return }
    // Use the anchor's translation (last column of its transform) as a world-space position.
    shipNode.position = SCNVector3(anchor.transform.columns.3.x,
                                   anchor.transform.columns.3.y,
                                   anchor.transform.columns.3.z)
    sceneView.scene.rootNode.addChildNode(shipNode)
    print(anchor.transform)
}
In the case of vertical plane detection the anchor will be an ARPlaneAnchor; in the case of image recognition it will be an ARImageAnchor.
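Inside that callback I can tell the two cases apart with a simple cast (the prints are just for my own logging):

if let planeAnchor = anchor as? ARPlaneAnchor {
    // Fired when a vertical plane is detected.
    print("plane anchor:", planeAnchor.transform)
} else if let imageAnchor = anchor as? ARImageAnchor {
    // Fired when the reference image is recognized.
    print("image anchor:", imageAnchor.referenceImage.name ?? "unnamed", imageAnchor.transform)
}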
Why are the transform matrices of those two anchors so different? I'm printing anchor.transform and I get these results:
1.
simd_float4x4([
    [0.941312, 0.0, -0.337538, 0.0],
    [0.336284, -0.0861278, 0.937814, 0.0],
    [-0.0290714, -0.996284, -0.0810731, 0.0],
    [0.191099, 0.172432, -1.14543, 1.0]
])
2.
simd_float4x4([
    [0.361231, 0.10894, 0.926093, 0.0],
    [-0.919883, -0.121052, 0.373049, 0.0],
    [0.152743, -0.986651, 0.0564843, 0.0],
    [75.4418, 10.9618, -14.3788, 1.0]
])
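In both cases I read the position out of the last column of the matrix, like this:

// The anchor's translation sits in the fourth column of the 4x4 transform.
let translation = anchor.transform.columns.3
let position = SCNVector3(translation.x, translation.y, translation.z)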
So if I want to place a 3D object on the detected vertical plane, I can simply use [x = 0.191099, y = 0.172432, z = -1.14543] as the position of my node (myNode) and add it to the scene with sceneView.scene.rootNode.addChildNode(myNode). But if I want to place a 3D object at the detected image's anchor, I cannot use [x = 75.4418, y = 10.9618, z = -14.3788].
What should I do to place a 3D object at the detected image's anchor? I really don't understand the transform matrix of the ARImageAnchor.
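Presumably I could sidestep the matrix entirely by parenting my content to the node that ARKit passes into the callback, since the documentation says that node already tracks the anchor's transform. Something like:

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard anchor is ARImageAnchor else { return }
    // A placeholder sphere just to visualize the anchor's position.
    let marker = SCNNode(geometry: SCNSphere(radius: 0.02))
    // `node` already carries the anchor's transform, so a child
    // at (0, 0, 0) should end up centered on the detected image.
    node.addChildNode(marker)
}

But even so, I would like to understand why the image anchor's transform contains such large translation values.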