I am new to iPhone app development (but an experienced developer in other languages), and I am making an ARKit application in which I need to track an image's position and display a rectangle around it.
I could do this in C++ with OpenCV and write the appropriate "bridging" classes so that I can call this code from Swift. Now I need to get the images from ARKit and pass them to this function.
How do I subscribe a function that handles the `ARFrame`s from the ARKit scene? I found that I can get an `ARFrame` from `sceneView.session.currentFrame`, but I did not find how to make a function that would be called for each frame (or each time my function has finished and is ready to receive the next frame).
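For context, this is the closest I have got so far: polling `sceneView.session.currentFrame` from the SceneKit render loop. The `MyOpenCVBridge` call is only a placeholder for my bridged C++/OpenCV code, not a real API:

```swift
import UIKit
import ARKit
import SceneKit

class ViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // ARSCNViewDelegate also gives access to the SceneKit render loop callbacks.
        sceneView.delegate = self
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        sceneView.session.run(ARWorldTrackingConfiguration())
    }

    // Called by SceneKit once per rendered frame.
    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        guard let frame = sceneView.session.currentFrame else { return }
        let pixelBuffer: CVPixelBuffer = frame.capturedImage
        // Placeholder: hand the camera image to my bridged OpenCV code.
        // MyOpenCVBridge.detectImage(in: pixelBuffer)
    }
}
```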
Also, I discovered the Vision framework, but it seems to only be able to track an element that the user has tapped. Is that right, or is there a combination of built-in functions that could be used for this purpose?
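From what I have read, the tracking requests only seem to need an initial `VNDetectedObjectObservation`, which could come from a detection request instead of a tap. Something like this untested sketch is what I imagine, but I am not sure it is the intended usage:

```swift
import Vision
import CoreVideo
import CoreGraphics

/// Rough idea: seed the tracker with a detected rectangle instead of a user tap.
final class RectangleTracker {
    private let sequenceHandler = VNSequenceRequestHandler()
    private var lastObservation: VNDetectedObjectObservation?

    /// Run once on the first frame to find the rectangle automatically.
    func detect(in pixelBuffer: CVPixelBuffer) throws {
        let request = VNDetectRectanglesRequest()
        request.maximumObservations = 1
        try VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
        lastObservation = request.results?.first as? VNRectangleObservation
    }

    /// Run on every subsequent frame to follow the rectangle found above.
    func track(in pixelBuffer: CVPixelBuffer) throws -> CGRect? {
        guard let observation = lastObservation else { return nil }
        let request = VNTrackObjectRequest(detectedObjectObservation: observation)
        try sequenceHandler.perform([request], on: pixelBuffer)
        guard let result = request.results?.first as? VNDetectedObjectObservation else { return nil }
        lastObservation = result
        return result.boundingBox // normalized image coordinates
    }
}
```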
The `ARSessionDelegate` protocol. I believe it meets your requirement of subscribing a function which handles the `ARFrame`s from the ARKit scene. – leandrodemarco
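A minimal sketch of that suggestion, assuming a hypothetical bridged `OpenCVWrapper` class standing in for the C++/OpenCV code:

```swift
import UIKit
import ARKit

class ViewController: UIViewController, ARSessionDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Register as the session delegate to receive every captured frame.
        sceneView.session.delegate = self
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        sceneView.session.run(ARWorldTrackingConfiguration())
    }

    // Called by ARKit each time a new frame is captured.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let pixelBuffer: CVPixelBuffer = frame.capturedImage
        // Hypothetical bridged wrapper around the C++/OpenCV detector:
        // OpenCVWrapper.findImage(in: pixelBuffer)
    }
}
```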