
I am quite new and experimenting with Apple's ARKit and have a question regarding the rotation information of the ARCamera. I am capturing photos and saving the current position, orientation, and rotation of the camera with each image taken. The idea is to create 2D plane nodes from these images and have them appear in another view in the same position/orientation/rotation (with respect to the origin) as when they were captured (as if the images were frozen in the air at the moment of capture). The position information works fine, but the orientation/rotation comes out completely off, as I'm having difficulty understanding when to use self.sceneView.session.currentFrame?.camera.eulerAngles vs self.sceneView.pointOfView?.orientation vs self.sceneView.pointOfView?.rotation.

This is how I set up my 2D image planes:

// Size the plane relative to the view bounds, scaled down to scene units
let imagePlane = SCNPlane(width: self.sceneView.bounds.width / 6000,
                          height: self.sceneView.bounds.height / 6000)
imagePlane.firstMaterial?.diffuse.contents = self.image // <-- UIImage here
imagePlane.firstMaterial?.lightingModel = .constant     // unlit, so the photo shows as-is
self.planeNode = SCNNode(geometry: imagePlane)

Then I set self.planeNode.eulerAngles.x to the value I get in the capture view from self.sceneView.session.currentFrame?.camera.eulerAngles.x (and do the same for y and z as well).
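
In code, that step looks roughly like this (capturedEuler is just a placeholder property for the values I store at capture time; it is not shown in the snippet above):

// At capture time, save the camera's Euler angles alongside the photo.
// capturedEuler is a placeholder stored property of type simd_float3.
if let camera = self.sceneView.session.currentFrame?.camera {
    self.capturedEuler = camera.eulerAngles // angles in radians
}

// Later, when building the plane node, apply the saved angles.
self.planeNode.eulerAngles = SCNVector3(self.capturedEuler.x,
                                        self.capturedEuler.y,
                                        self.capturedEuler.z)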

I then set the rotation of the node as self.planeNode.rotation.x = self.rotX (where self.rotX is the information I get from self.sceneView.pointOfView?.rotation.x).

I have also tried to set it as follows:

// Build individual axis rotations from the captured angles,
// then combine them and apply the result to the node's pivot.
let xAngle = SCNMatrix4MakeRotation(Float(self.rotX), 1, 0, 0)
let yAngle = SCNMatrix4MakeRotation(Float(self.rotY), 0, 1, 0)
let zAngle = SCNMatrix4MakeRotation(Float(self.rotZ), 0, 0, 1)
let rotationMatrix = SCNMatrix4Mult(SCNMatrix4Mult(xAngle, yAngle), zAngle)
self.planeNode.pivot = SCNMatrix4Mult(rotationMatrix, self.planeNode.transform)

The documentation states that eulerAngles is the "orientation" of the camera in roll, pitch, and yaw values, but then what is self.sceneView.pointOfView?.orientation used for?

So when I specify the position, orientation, and rotation of my plane nodes, is the information I get from eulerAngles enough to reproduce the correct orientation of the images?

Is my approach to this completely wrong or am I missing something obvious? Any help would be much appreciated!


1 Answer


If what you want to do is essentially create a billboard that faces the camera at the time of capture, then you can take the camera's transform matrix (it already has the correct orientation), apply an inverse translation to it to move it to the object's location, and then use that matrix to position your billboard. This way you don't have to deal with any of the angles or worry about the correct order in which to compose the rotations. The translation is easy because all you need to do is subtract the object's location from the camera's location. One of the ARKit WWDC sessions actually has an example that does something similar (it creates billboards at the camera's location); the only change you need to make is to translate the billboard away from the camera's position.
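
A rough sketch of that idea, assuming an ARSCNView named sceneView and a plane node built as in the question (placeBillboard, distance, and the 0.5 m default are placeholders, not from the WWDC sample):

import ARKit

func placeBillboard(_ planeNode: SCNNode, in sceneView: ARSCNView, distance: Float = 0.5) {
    // The camera transform already encodes the orientation of a billboard
    // facing the camera, so no Euler angles are needed.
    guard let camera = sceneView.session.currentFrame?.camera else { return }

    // Translate along the camera's local -z axis so the plane sits in front
    // of the capture position rather than exactly on it.
    var translation = matrix_identity_float4x4
    translation.columns.3.z = -distance
    planeNode.simdTransform = simd_mul(camera.transform, translation)

    sceneView.scene.rootNode.addChildNode(planeNode)
}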