
I am developing an Augmented Reality app using the Vuforia SDK. I am trying to use AVCaptureVideoPreviewLayer and SceneKit for application rendering instead of the raw OpenGL calls used in the Vuforia sample code.

I got the AVCaptureVideoPreviewLayer and SCNView working without Vuforia, i.e. I managed to draw a 3D scene on top of the camera video background. The code is at: https://github.com/lge88/scenekit-test0/blob/master/scenekit-test0/GameViewController.swift#L74-L85:

func initViews() {
    let rootView = self.view

    // The SceneKit view renders the 3D content on top of the video.
    let scnView = createSceneView()
    let scene = createScene()
    scnView.scene = scene

    // The video view hosts the AVCaptureVideoPreviewLayer.
    let videoView = createVideoView()

    // Order matters: scnView is added last so it draws above videoView.
    rootView.addSubview(videoView)
    rootView.addSubview(scnView)
}

The implementation can be summarized as:

  1. Create a UIView called videoView.
  2. Initialize an AVCaptureVideoPreviewLayer, and add it as a sublayer of videoView.
  3. Create a SCNView called scnView and assign the scene to scnView.
  4. Add both videoView and scnView to the root UIView.
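Steps 1 and 2 above can be sketched roughly as follows. This is a minimal sketch written against the current AVFoundation API, not the repo's exact code; the `createVideoView` helper and the capture-session setup are assumptions:

```swift
import UIKit
import AVFoundation

// Sketch of steps 1-2: a UIView whose sublayer is an
// AVCaptureVideoPreviewLayer fed by its own capture session.
func createVideoView(frame: CGRect) -> UIView {
    let videoView = UIView(frame: frame)

    // Pull frames from the default back camera.
    let session = AVCaptureSession()
    if let camera = AVCaptureDevice.default(for: .video),
       let input = try? AVCaptureDeviceInput(device: camera),
       session.canAddInput(input) {
        session.addInput(input)
    }

    // The preview layer renders the session's output; add it as a sublayer.
    let previewLayer = AVCaptureVideoPreviewLayer(session: session)
    previewLayer.frame = videoView.bounds
    previewLayer.videoGravity = .resizeAspectFill
    videoView.layer.addSublayer(previewLayer)

    session.startRunning()
    return videoView
}
```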

Currently I am trying to integrate the Augmented Reality feature, GameViewController.swift#L68-L71:

initViews()
animateScene()
initControls()
ARServer(size:viewFrame.size, done: initARCallback)

ARServer is a class that takes care of the Vuforia initialization; its implementation is taken from the Vuforia ImageTargets sample code. The tracker is working: it can successfully track the targets of the sample dataset.

However, the AVCaptureVideoPreviewLayer rendering doesn't work correctly: the video rendering area is resized, and the video layer is not updating — it shows a static image captured when the tracker camera started. Here is how it looks in an iPad screenshot: https://github.com/lge88/scenekit-test0/blob/master/video-preview-layer-issue.png

1 Answer

This strategy could get ugly on you really fast. Better would be to render everything into one view with one OpenGL context. If Vuforia wants to do its own GL stuff, it can share that view/context, too.

Look at Apple's GLCameraRipple sample code for getting live camera imagery into GL, and SCNRenderer for making SceneKit render its content into an arbitrary OpenGL (ES) context.
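The SCNRenderer route looks roughly like this. This is a sketch only; `eaglContext` and `scene` are assumed to come from your existing GL/Vuforia setup, and the per-frame call goes wherever your GL draw loop lives:

```swift
import SceneKit
import QuartzCore

// Sketch: make SceneKit render into an OpenGL ES context you already own,
// e.g. the one the camera background is drawn into.
// `eaglContext` and `scene` are assumed to exist in your setup.
let renderer = SCNRenderer(context: eaglContext, options: nil)
renderer.scene = scene

// In the per-frame GL draw callback, after drawing the camera frame,
// draw the SceneKit content on top of it:
renderer.render(atTime: CACurrentMediaTime())
```

Because SCNRenderer draws into whatever framebuffer is currently bound, the camera background and the 3D content end up composited in a single context instead of two stacked views.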

Alternatively, if you just want to get camera imagery into a SceneKit view, remember you can assign any Core Animation layer to the contents of a material — this should work for AVCaptureVideoPreviewLayer, too.
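As a sketch of that alternative (the `previewLayer` and `scene` here are assumed to already exist from your setup; the plane geometry is just an example surface):

```swift
import SceneKit

// Sketch: any CALayer can serve as a material's contents, so an
// AVCaptureVideoPreviewLayer can texture a geometry inside the scene.
// `previewLayer` and `scene` are assumed from your existing setup.
let plane = SCNPlane(width: 4, height: 3)
plane.firstMaterial?.diffuse.contents = previewLayer
let videoNode = SCNNode(geometry: plane)
videoNode.position = SCNVector3(0, 0, -10)  // placed behind the rest of the scene
scene.rootNode.addChildNode(videoNode)
```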