3 votes

I am new to iPhone app development (but an experienced developer in other languages), and I am building an ARKit application in which I need to track an image's position and display a rectangle around it.

I can do this in C++ with OpenCV and write the appropriate "bridging" classes so that this code can be called from Swift. Now I need to get the images from ARKit and pass them to this function.

How do I subscribe a function that handles the ARFrames from the ARKit scene? I found that I can get an ARFrame from sceneView.session.currentFrame, but I could not find how to register a function that is called for each frame (or each time my function has finished and is ready to receive the next frame).

Also, I discovered the Vision framework, but it seems only able to track an element that the user has tapped. Is that right, or is there a combination of predefined functions that could be used for this purpose?

1
Have a look at session(_:didUpdate:) from the ARSessionDelegate protocol. I believe it meets your requirement of "subscribe a function which handles the ARFrames from the ARKit scene". – leandrodemarco
@leandrodemarco That worked, thanks. You can post it as an answer for future googlers. – Arnaud Denoyelle

1 Answer

5 votes

You can get each frame captured by the camera via the session(_:didUpdate:) method of the ARSessionDelegate protocol. Note that this method can be useful even if you don't use the provided ARFrame; for instance, I use it to "poll" whether any added virtual content is currently visible.
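A minimal sketch of wiring this up, assuming a standard ARSCNView-based view controller (the delegate method and capturedImage property are the real ARKit API; the class layout here is illustrative):

```swift
import ARKit

class ViewController: UIViewController, ARSessionDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Register self as the session delegate so session(_:didUpdate:) fires.
        sceneView.session.delegate = self
    }

    // Called by ARKit with every new frame from the camera.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // frame.capturedImage is a CVPixelBuffer you could hand to your
        // OpenCV bridging code for image tracking.
        let pixelBuffer = frame.capturedImage
        _ = pixelBuffer // ... process here, or skip frames while busy
    }
}
```

If your per-frame processing is slower than the camera rate, dispatch the work to a background queue and drop incoming frames while one is still being processed, rather than letting them queue up.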

Regarding the image-tracking question, I believe you could create an ARAnchor or SCNNode, position it over the image, and ARKit would track it.
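As a rough sketch of that idea (the helper name and the plane size are assumptions, not part of any API), once your tracking code gives you a world transform for the image, you could anchor a flat rectangle there:

```swift
import ARKit
import SceneKit

// Hypothetical helper: given a world transform for the detected image,
// add an anchor so ARKit keeps the location registered, and attach a
// semi-transparent plane node to visualize the rectangle.
func addRectangle(at transform: simd_float4x4, to sceneView: ARSCNView) {
    // Anchor the position in the AR session's world map.
    let anchor = ARAnchor(transform: transform)
    sceneView.session.add(anchor: anchor)

    // A flat plane roughly the size of the tracked image (10 cm, assumed).
    let plane = SCNPlane(width: 0.1, height: 0.1)
    plane.firstMaterial?.diffuse.contents =
        UIColor.red.withAlphaComponent(0.4)

    let node = SCNNode(geometry: plane)
    node.simdTransform = transform
    sceneView.scene.rootNode.addChildNode(node)
}
```

In a fuller implementation you would typically attach the node in ARSCNViewDelegate's renderer(_:didAdd:for:) callback for the anchor instead of adding it to the root node directly, so the node follows any anchor adjustments ARKit makes.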