
I've got a question about ARKit and ARCore.

I'm developing an app with Unreal Engine using its built-in augmented reality support. It uses ARCore on Android phones and ARKit on iOS devices.

Now, here is what I have observed: I have a virtual world that I can move through pretty well. But when tracking is lost (for example, because I'm looking at a white wall or I put my hand in front of the camera), the Android device loses both tracking and orientation, which means the whole world is stuck. If I do the same with an iPhone, I only lose positional tracking.

I found the following comparison of ARKit and ARCore.

For ARKit:

Motion Tracking: ARKit can unvaryingly and accurately track device’s positioning in reference with the real objects in the live frame that is captured by the camera using Visual Inertial Odometer (VIO). This allows the devices to capture motion sensor data, recording the real-time position of the device.

For ARCore:

Motion Tracking: ARCore tracks and interprets IMU (Inertial Measurement Unit) data unlike ARKit that goes with VIO. Quite differently it also measures the shape, built and features of the surrounding objects to detect and identify the right position and orientation of the Android device in use.

Source: https://www.itfirms.co/arkit-vs-arcore-how-they-compare-against-each-other/

The description of ARKit's motion tracking says nothing about orientation.

Can someone explain, in other and perhaps easier-to-understand words, why ARKit doesn't lose the orientation ability in this case?

Thanks in advance


1 Answer


Technically and conceptually, both approaches are the same. Contrary to what the article says,

interprets IMU (Inertial Measurement Unit) data unlike ARKit that goes with VIO

ARKit interprets image features (Visual) and IMU data (Inertial) and fuses them to get the change in position over time (Odometry) -> Visual Inertial Odometry

ARCore interprets image features and IMU data and fuses them to get the change in position over time -> Visual Inertial Odometry
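
To illustrate what "fusing" means here, a toy sketch follows. This is not real ARKit or ARCore code; all names and the blending step are invented for illustration. The point is that the inertial path alone can keep orientation usable, while the visual path is needed to keep the position estimate from drifting.

```swift
import simd

// Toy sketch of the VIO idea, not actual ARKit/ARCore code.
struct VIOTracker {
    var orientation = simd_quatd(angle: 0, axis: SIMD3<Double>(0, 1, 0)) // identity
    var position = SIMD3<Double>(repeating: 0)
    var velocity = SIMD3<Double>(repeating: 0)

    // Inertial part: integrating the gyroscope keeps orientation usable even
    // with no camera image, but double-integrating the accelerometer makes the
    // position estimate drift within seconds (gravity compensation omitted).
    mutating func integrateIMU(angularVelocity: SIMD3<Double>,
                               acceleration: SIMD3<Double>,
                               dt: Double) {
        let angle = simd_length(angularVelocity) * dt
        if angle > 1e-9 {
            let delta = simd_quatd(angle: angle, axis: simd_normalize(angularVelocity))
            orientation = simd_normalize(orientation * delta)
        }
        velocity += orientation.act(acceleration) * dt
        position += velocity * dt
    }

    // Visual part: feature matches between camera frames give a position
    // estimate that cancels the inertial drift. With a featureless image
    // (white wall, covered lens) this correction is simply unavailable.
    mutating func correctWithVision(visualPosition: SIMD3<Double>, blend: Double = 0.1) {
        position += (visualPosition - position) * blend
    }
}
```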

So the approaches and underlying concepts are the same; what differs is the implementation. In the end it is a design decision about what to do when the visual part of the tracking system fails. Apple apparently decided to keep using the IMU, which can still track orientation, while Google decided to stop the whole tracking pipeline. (You may have noticed that after you cover the ARCore camera, you have around one second in which the tracking still reacts to orientation changes; only after that timeout does tracking stop completely.)
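
You can observe this difference through the tracking states each SDK exposes. Below is a minimal sketch using ARKit's native Swift API directly (not Unreal's AR wrapper): when the camera image becomes useless, ARKit does not pause the session but reports a "limited" tracking state and keeps updating orientation from the IMU.

```swift
import ARKit

// Minimal sketch: log what ARKit reports when visual tracking degrades
// (camera covered, blank wall). The session keeps running in a "limited"
// state; orientation still updates from the IMU.
class TrackingObserver: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
        switch camera.trackingState {
        case .normal:
            print("Full 6DoF tracking: position and orientation")
        case .limited(let reason):
            // Position is unreliable or frozen, but rotation keeps coming
            // from the IMU - this is why the iPhone world still turns with you.
            print("Limited tracking, reason: \(reason)")
        case .notAvailable:
            print("No tracking at all")
        }
    }
}
```

On the ARCore side, as far as I know the equivalent check is Camera.getTrackingState(), which switches to TrackingState.PAUSED once the timeout you noticed elapses, and pose updates stop entirely until visual tracking recovers.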