
I am currently trying to create a Unity project which uses both OpenCV and ARKit. I want OpenCV in order to perform some lightweight feature recognition that I don't want to do through ARKit directly. I have the ARKit app and the OpenCV app working separately; however, when they are used together, ARKit grabs the camera, and I haven't yet figured out how to get the ARKit frame data to OpenCV for the feature recognition I have planned.

My current plan is to pipe the ARKit frame data out of the ARFrameUpdated callback, with something like the below:

public void ARFrameUpdated(UnityARCamera camera)
{
    // Get the frame pixel buffer
    var cvPixBuf = camera.videoParams.cvPixelBufferPtr;

    // Somehow convert to BGRA and wrap in an OpenCV data structure

    // Perform OpenCV related actions 
}

However, I am unsure how to convert camera.videoParams.cvPixelBufferPtr to something which I could use with OpenCV.
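For context, here is a sketch of the direction I've been exploring (not working code I can vouch for): treating `cvPixelBufferPtr` as a native `CVPixelBufferRef` and reading it via P/Invoke into CoreVideo, then wrapping a plane in an OpenCvSharp `Mat`. ARKit's capture buffers are typically bi-planar YCbCr rather than BGRA, so for feature detection the Y (luma) plane alone may be enough, which would skip the colour conversion entirely. The class and method names here (`PixelBufferBridge`, `ProcessFrame`) are my own placeholders, and the whole thing assumes OpenCvSharp is the OpenCV binding in use:

```csharp
using System;
using System.Runtime.InteropServices;
using OpenCvSharp; // assumption: OpenCvSharp is the OpenCV wrapper in the project

public static class PixelBufferBridge
{
    // CoreVideo functions, bound via __Internal on iOS.
    [DllImport("__Internal")] static extern int CVPixelBufferLockBaseAddress(IntPtr buf, ulong flags);
    [DllImport("__Internal")] static extern int CVPixelBufferUnlockBaseAddress(IntPtr buf, ulong flags);
    [DllImport("__Internal")] static extern IntPtr CVPixelBufferGetBaseAddressOfPlane(IntPtr buf, UIntPtr plane);
    [DllImport("__Internal")] static extern UIntPtr CVPixelBufferGetWidthOfPlane(IntPtr buf, UIntPtr plane);
    [DllImport("__Internal")] static extern UIntPtr CVPixelBufferGetHeightOfPlane(IntPtr buf, UIntPtr plane);
    [DllImport("__Internal")] static extern UIntPtr CVPixelBufferGetBytesPerRowOfPlane(IntPtr buf, UIntPtr plane);

    // Wraps plane 0 (the Y/luma plane of ARKit's YCbCr buffer) in a grayscale Mat.
    public static void ProcessFrame(IntPtr pixelBuffer)
    {
        CVPixelBufferLockBaseAddress(pixelBuffer, 1 /* kCVPixelBufferLock_ReadOnly */);
        try
        {
            int w = (int)CVPixelBufferGetWidthOfPlane(pixelBuffer, UIntPtr.Zero);
            int h = (int)CVPixelBufferGetHeightOfPlane(pixelBuffer, UIntPtr.Zero);
            long stride = (long)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, UIntPtr.Zero);
            IntPtr yPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, UIntPtr.Zero);

            // 8-bit single-channel Mat over the luma plane; this does not copy,
            // so all OpenCV work must finish before the buffer is unlocked.
            using (var gray = new Mat(h, w, MatType.CV_8UC1, yPlane, stride))
            {
                // e.g. Cv2.GoodFeaturesToTrack(gray, ...) for feature detection
            }
        }
        finally
        {
            CVPixelBufferUnlockBaseAddress(pixelBuffer, 1);
        }
    }
}
```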

If anyone knows another approach which I could use to do this, that would also be appreciated.


1 Answer


Try creating a new camera, adding the UnityARVideo component to it, setting its culling mask to Nothing, and having it render to a RenderTexture. It will render an unaugmented camera feed to the texture.
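To make this concrete, here is a hedged sketch of the readback step, assuming the setup above is already in place (a second camera with UnityARVideo attached, culling mask set to Nothing, and its targetTexture pointing at `feedRT`). The class name `ARFeedReader` and the field `feedRT` are illustrative, not part of the plugin:

```csharp
using UnityEngine;

public class ARFeedReader : MonoBehaviour
{
    public RenderTexture feedRT; // assigned as the second camera's targetTexture
    Texture2D cpuCopy;

    void Start()
    {
        cpuCopy = new Texture2D(feedRT.width, feedRT.height, TextureFormat.RGBA32, false);
    }

    void LateUpdate()
    {
        // Copy the rendered camera feed from GPU to CPU memory.
        var prev = RenderTexture.active;
        RenderTexture.active = feedRT;
        cpuCopy.ReadPixels(new Rect(0, 0, feedRT.width, feedRT.height), 0, 0);
        cpuCopy.Apply();
        RenderTexture.active = prev;

        // Raw RGBA bytes, ready to hand to OpenCV (e.g. as an 8UC4 Mat).
        byte[] rgba = cpuCopy.GetRawTextureData();
        // ... feed into OpenCV here ...
    }
}
```

Note that `ReadPixels` stalls the GPU, so doing this every frame is expensive; on newer Unity versions `AsyncGPUReadback` may be a faster alternative.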