9
votes

I'm currently doing some experiments with RealityKit.

I've been looking at some sample code, and I'm a bit confused about the differences between ARAnchor and AnchorEntity, and when to use one over the other.

So far I know that:

  • Both are anchors that describe a position in the real world.
  • AnchorEntity can also have other Entities as children, so you can add model objects directly to the anchor. You can't do this with ARAnchor; you have to add model objects "manually" to the root node and use the anchor's position to place them correctly.
  • The documentation says that ARKit uses an added ARAnchor to optimize tracking in the area around that anchor. The documentation for AnchorEntity does not mention this.

Right now I add an AnchorEntity to the session as a "root node", since it's simpler to use: I can simply add models as children directly to this anchor. But then I also add an ARAnchor, located at the same position, to the scene's anchors to enhance tracking around this point. Is this necessary?

Can anyone help me clarify the differences and use cases of these two?


1 Answer

15
votes

Updated: April 03, 2021.


The ARAnchor class and the AnchorEntity class were both made for the same divine purpose – tethering 3D content to your real-world objects.

RealityKit's AnchorEntity greatly extends the capabilities of ARKit's ARAnchor. The most important difference between the two is that AnchorEntity automatically tracks its real-world target, whereas with ARAnchor you need session(...) or renderer(...) delegate methods to track it yourself. Also take into consideration that the collection of ARAnchors is stored in the ARSession, while the collection of AnchorEntities is stored in the Scene.
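Here's a minimal sketch of that difference (assuming an ARView outlet named `arView` inside a view controller – names here are illustrative): the AnchorEntity goes into the Scene and tracks its target automatically, while the ARAnchor goes into the ARSession and has to be followed via delegate callbacks.

```swift
import UIKit
import ARKit
import RealityKit

class ViewController: UIViewController, ARSessionDelegate {

    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.session.delegate = self

        // RealityKit – the AnchorEntity is stored in the Scene
        // and tracks its plane target automatically.
        let anchorEntity = AnchorEntity(.plane(.horizontal,
                                               classification: .any,
                                               minimumBounds: [0.2, 0.2]))
        arView.scene.addAnchor(anchorEntity)

        // ARKit – the ARAnchor is stored in the ARSession;
        // you have to react to it in delegate methods yourself.
        let arAnchor = ARAnchor(name: "myAnchor",
                                transform: matrix_identity_float4x4)
        arView.session.add(anchor: arAnchor)
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        // Reposition your models manually when ARAnchors are updated.
    }
}
```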

Here are hierarchical differences:

[Image: ARKit vs RealityKit anchor class hierarchies]

The main advantage of RealityKit is the ability to use several different AnchorEntity targets at the same time, such as .plane, .image and .object.
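A quick sketch of this, assuming the same `arView` as above and an asset catalog resource group called "AR Resources" (the group and resource names are hypothetical):

```swift
import RealityKit

// All three anchor entities can live in one RealityKit scene at once.
let planeAnchor = AnchorEntity(.plane(.horizontal,
                                      classification: .any,
                                      minimumBounds: [0.2, 0.2]))
let imageAnchor = AnchorEntity(.image(group: "AR Resources", name: "poster"))
let objectAnchor = AnchorEntity(.object(group: "AR Resources", name: "sculpture"))

arView.scene.anchors.append(planeAnchor)
arView.scene.anchors.append(imageAnchor)
arView.scene.anchors.append(objectAnchor)
```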

In ARKit, as you know, you can run just one configuration in the current session: World Tracking, Image Tracking, or Object Scanning. There is an exception in ARKit, however – you can run two configs together: World Tracking and Face Tracking (but one of them has to be the driver, and the other one driven).
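A sketch of that exception, again assuming `arView` (user face tracking requires a device with a TrueDepth camera):

```swift
import ARKit

// World Tracking is the driver; Face Tracking is driven alongside it.
let config = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsUserFaceTracking {
    config.userFaceTrackingEnabled = true
}
arView.session.run(config)
```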


Apple Developer documentation says:

In RealityKit, you use an AnchorEntity instance as the root of an entity hierarchy, and add it to the anchors collection for a Scene instance. This enables ARKit to place the anchor entity, along with all of its hierarchical descendants, into the real world. In addition to the components the anchor entity inherits from the Entity class, the anchor entity also conforms to the HasAnchoring protocol, giving it an AnchoringComponent instance.

AnchorEntity has three building blocks:

  • Transform component (transformation matrix containing translate, rotate and scale)
  • Synchronization component (entity's synchronization data for multiuser experience)
  • Anchoring component (lets you choose the type of anchor – world, body, image, etc.)


All entities have a Synchronization component, which helps organize collaborative sessions.
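You can inspect all three building blocks on any freshly created anchor – a minimal sketch:

```swift
import RealityKit

let anchor = AnchorEntity(.plane(.horizontal,
                                 classification: .any,
                                 minimumBounds: [0.1, 0.1]))

print(anchor.transform)               // Transform component
print(anchor.anchoring.target)        // Anchoring component's target
print(anchor.synchronization as Any)  // Synchronization component (optional)
```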



AnchorEntity has nine specific anchor types for nine different purposes:

  • ARAnchor
    • helps implement 10 ARKit anchors, including ARGeoAnchor and ARAppClipCodeAnchor
  • body
  • camera
  • face
  • image
  • object
  • plane
  • world
  • raycastResult
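
Most of these map directly onto AnchoringComponent.Target cases. A few of them in code (a sketch, not tied to any particular scene):

```swift
import RealityKit
import simd

let worldAnchor  = AnchorEntity(.world(transform: matrix_identity_float4x4))
let cameraAnchor = AnchorEntity(.camera)
let faceAnchor   = AnchorEntity(.face)
let bodyAnchor   = AnchorEntity(.body)
let planeAnchor  = AnchorEntity(.plane(.vertical,
                                       classification: .wall,
                                       minimumBounds: [0.5, 0.5]))
```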


You can simultaneously use both classes, ARAnchor and AnchorEntity, in your app. Or you can use just the AnchorEntity class, because it's self-sufficient.
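For instance, here's a minimal sketch of the combined approach the question describes (assuming an `ARView` named `arView` and a ready-made `ModelEntity`; the function and anchor names are illustrative): the ARAnchor goes into the session, and an AnchorEntity wrapped around it carries the model.

```swift
import ARKit
import RealityKit

func place(_ model: ModelEntity, at transform: simd_float4x4, in arView: ARView) {
    // ARKit side – the anchor is added to the session
    // (and ARKit can use it to refine tracking nearby).
    let arAnchor = ARAnchor(name: "modelAnchor", transform: transform)
    arView.session.add(anchor: arAnchor)

    // RealityKit side – wrap the ARAnchor in an AnchorEntity
    // and parent the model directly to it.
    let anchorEntity = AnchorEntity(anchor: arAnchor)
    anchorEntity.addChild(model)
    arView.scene.addAnchor(anchorEntity)
}
```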

For additional info about ARAnchor and AnchorEntity, please look at THIS POST.