
I am trying to implement a real-time camera app with AVFoundation, GLKit, and Core Image (not using GPUImage).

So I found this tutorial:
http://altitudelabs.com/blog/real-time-filter/
It was written in Objective-C, so I rewrote the code in Swift 4.0 with Xcode 9.

It seemed to work fine, but sometimes (rarely) it crashed with the following error when GLKView's display method was called:

EXC_BAD_ACCESS (code=1, address=0x********)

At the time of the crash, the GLKView exists (not nil), and so do the EAGLContext and CIContext. My code is as follows:


import UIKit
import AVFoundation
import GLKit
import OpenGLES

class ViewController: UIViewController {

    var videoDevice : AVCaptureDevice!
    var captureSession : AVCaptureSession!
    var captureSessionQueue : DispatchQueue!
    var videoPreviewView: GLKView!
    var ciContext: CIContext!
    var eaglContext: EAGLContext!
    var videoPreviewViewBounds: CGRect = CGRect.zero

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
        // remove the view's background color; this allows us not to use the opaque property (self.view.opaque = NO) since we remove the background color drawing altogether
        self.view.backgroundColor = UIColor.clear

        // setup the GLKView for video/image preview
        let window : UIView = UIApplication.shared.delegate!.window!!
        eaglContext = EAGLContext(api: .openGLES2)
        videoPreviewView = GLKView(frame: videoPreviewViewBounds, context: eaglContext)
        videoPreviewView.enableSetNeedsDisplay = false

        // because the native video image from the back camera is in UIDeviceOrientationLandscapeLeft (i.e. the home button is on the right), we need to apply a clockwise 90 degree transform so that we can draw the video preview as if we were in a landscape-oriented view; if you're using the front camera and you want to have a mirrored preview (so that the user is seeing themselves in the mirror), you need to apply an additional horizontal flip (by concatenating CGAffineTransformMakeScale(-1.0, 1.0) to the rotation transform)
        videoPreviewView.transform = CGAffineTransform(rotationAngle: CGFloat.pi/2.0)
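        // Hypothetical variant, not used in this back-camera project: for a mirrored
        // front-camera preview, the rotation above would be concatenated with a
        // horizontal flip, e.g.
        //   videoPreviewView.transform = CGAffineTransform(rotationAngle: CGFloat.pi/2.0)
        //       .concatenating(CGAffineTransform(scaleX: -1.0, y: 1.0))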
        videoPreviewView.frame = window.bounds

        // we make our video preview view a subview of the window, and send it to the back; this makes ViewController's view (and its UI elements) on top of the video preview, and also makes video preview unaffected by device rotation
        window.addSubview(videoPreviewView)
        window.sendSubview(toBack: videoPreviewView)

        // bind the frame buffer to get the frame buffer width and height;
        // the bounds used by CIContext when drawing to a GLKView are in pixels (not points),
        // hence the need to read from the frame buffer's width and height;
        // in addition, since we will be accessing the bounds in another queue (_captureSessionQueue),
        // we want to obtain this piece of information so that we won't be
        // accessing _videoPreviewView's properties from another thread/queue
        videoPreviewView.bindDrawable()
        videoPreviewViewBounds = CGRect.zero
        videoPreviewViewBounds.size.width = CGFloat(videoPreviewView.drawableWidth)
        videoPreviewViewBounds.size.height = CGFloat(videoPreviewView.drawableHeight)

        // create the CIContext instance, note that this must be done after _videoPreviewView is properly set up
        ciContext = CIContext(eaglContext: eaglContext, options: [kCIContextWorkingColorSpace: NSNull()])

        if AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInDualCamera, .builtInTelephotoCamera, .builtInWideAngleCamera], mediaType: AVMediaType.video, position: .back).devices.count > 0 {
            start()
        } else {
            print("No device with AVMediaTypeVideo")
        }
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    func start() {
        let videoDevices = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: AVMediaType.video, position: .back).devices

        videoDevice = videoDevices.first

        var videoDeviceInput : AVCaptureInput!
        do {
            videoDeviceInput =  try AVCaptureDeviceInput(device: videoDevice)
        } catch let error {
            print("Unable to obtain video device input, error: \(error)")
            return
        }

        let preset = AVCaptureSession.Preset.high
        captureSession = AVCaptureSession()
        captureSession.sessionPreset = preset

        // Core Image wants BGRA pixel format
        let outputSetting = [String(kCVPixelBufferPixelFormatTypeKey): kCVPixelFormatType_32BGRA]
        // create and configure the video data output
        let videoDataOutput = AVCaptureVideoDataOutput()
        videoDataOutput.videoSettings = outputSetting

        // create the dispatch queue for handling capture session delegate method calls
        captureSessionQueue = DispatchQueue(label: "capture_session_queue")
        videoDataOutput.setSampleBufferDelegate(self, queue: captureSessionQueue)
        videoDataOutput.alwaysDiscardsLateVideoFrames = true

        captureSession.beginConfiguration()
        if !captureSession.canAddOutput(videoDataOutput) {
            print("Cannot add video data output")
            captureSession = nil
            return
        }

        captureSession.addInput(videoDeviceInput)
        captureSession.addOutput(videoDataOutput)

        captureSession.commitConfiguration()

        captureSession.startRunning()
    }

}

extension ViewController : AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        let imageBuffer : CVImageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
        let sourceImage = CIImage(cvImageBuffer: imageBuffer, options: nil)
        let sourceExtent = sourceImage.extent

        let vignetteFilter = CIFilter(name: "CIVignetteEffect", withInputParameters: nil)
        vignetteFilter?.setValue(sourceImage, forKey: kCIInputImageKey)
        vignetteFilter?.setValue(CIVector(x: sourceExtent.size.width/2.0, y: sourceExtent.size.height/2.0), forKey: kCIInputCenterKey)
        vignetteFilter?.setValue(sourceExtent.width/2.0, forKey: kCIInputRadiusKey)
        let filteredImage = vignetteFilter?.outputImage

        let sourceAspect = sourceExtent.width/sourceExtent.height
        let previewAspect = videoPreviewViewBounds.width/videoPreviewViewBounds.height

        // we want to maintain the aspect ratio of the screen size, so we clip the video image
        var drawRect = sourceExtent
        if sourceAspect > previewAspect {
            // use full height of the video image, and center crop the width
            drawRect.origin.x += (drawRect.size.width - drawRect.size.height * previewAspect) / 2.0
            drawRect.size.width = drawRect.size.height * previewAspect
        } else {
            // use full width of the video image, and center crop the height
            drawRect.origin.y += (drawRect.size.height - drawRect.size.width / previewAspect) / 2.0
            drawRect.size.height = drawRect.size.width / previewAspect
        }

        videoPreviewView.bindDrawable()

        if eaglContext != EAGLContext.current() {
            EAGLContext.setCurrent(eaglContext)
        }
        print("current thread \(Thread.current)")
        // clear eagl view to grey
        glClearColor(0.5, 0.5, 0.5, 1.0)
        glClear(GLbitfield(GL_COLOR_BUFFER_BIT))

        // set the blend mode to "source over" so that CI will use that
        glEnable(GLenum(GL_BLEND))
        glBlendFunc(GLenum(GL_ONE), GLenum(GL_ONE_MINUS_SRC_ALPHA))

        if let filteredImage = filteredImage {
            ciContext.draw(filteredImage, in: videoPreviewViewBounds, from: drawRect)
        }
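        // per the stack trace below, the EXC_BAD_ACCESS is raised inside the display() call that follows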

        videoPreviewView.display()
    }
}

And the stack trace when it crashes is:


* thread #5, queue = 'com.apple.avfoundation.videodataoutput.bufferqueue', stop reason = EXC_BAD_ACCESS (code=1, address=0x8000000000000000)
frame #0: 0x00000001a496f098 AGXGLDriver`___lldb_unnamed_symbol149$$AGXGLDriver + 332
frame #1: 0x00000001923c029c OpenGLES`-[EAGLContext getParameter:to:] + 80
frame #2: 0x000000010038bca4 libglInterpose.dylib`__clang_call_terminate + 1976832
frame #3: 0x00000001001ab75c libglInterpose.dylib`__clang_call_terminate + 9400
frame #4: 0x000000010038b8b4 libglInterpose.dylib`__clang_call_terminate + 1975824
frame #5: 0x00000001001af098 libglInterpose.dylib`__clang_call_terminate + 24052
frame #6: 0x00000001001abe5c libglInterpose.dylib`__clang_call_terminate + 11192
frame #7: 0x000000010038f9dc libglInterpose.dylib`__clang_call_terminate + 1992504
frame #8: 0x000000010038d5b8 libglInterpose.dylib`__clang_call_terminate + 1983252
frame #9: 0x000000019a1e2a20 GLKit`-[GLKView _display:] + 308
* frame #10: 0x0000000100065e78 RealTimeCameraPractice`ViewController.captureOutput(output=0x0000000174034820, sampleBuffer=0x0000000119e25e70, connection=0x0000000174008850, self=0x0000000119d032d0) at ViewController.swift:160
frame #11: 0x00000001000662dc RealTimeCameraPractice`@objc ViewController.captureOutput(_:didOutput:from:) at ViewController.swift:0
frame #12: 0x00000001977ec310 AVFoundation`-[AVCaptureVideoDataOutput _handleRemoteQueueOperation:] + 308
frame #13: 0x00000001977ec14c AVFoundation`__47-[AVCaptureVideoDataOutput _updateRemoteQueue:]_block_invoke + 100
frame #14: 0x00000001926bdf38 CoreMedia`__FigRemoteOperationReceiverCreateMessageReceiver_block_invoke + 260
frame #15: 0x00000001926dce9c CoreMedia`__FigRemoteQueueReceiverSetHandler_block_invoke.2 + 224
frame #16: 0x000000010111da10 libdispatch.dylib`_dispatch_client_callout + 16
frame #17: 0x0000000101129a84 libdispatch.dylib`_dispatch_continuation_pop + 552
frame #18: 0x00000001011381f8 libdispatch.dylib`_dispatch_source_latch_and_call + 204
frame #19: 0x000000010111fa60 libdispatch.dylib`_dispatch_source_invoke + 828
frame #20: 0x000000010112b128 libdispatch.dylib`_dispatch_queue_serial_drain + 692
frame #21: 0x0000000101121634 libdispatch.dylib`_dispatch_queue_invoke + 852
frame #22: 0x000000010112b128 libdispatch.dylib`_dispatch_queue_serial_drain + 692
frame #23: 0x0000000101121634 libdispatch.dylib`_dispatch_queue_invoke + 852
frame #24: 0x000000010112c358 libdispatch.dylib`_dispatch_root_queue_drain_deferred_item + 276
frame #25: 0x000000010113457c libdispatch.dylib`_dispatch_kevent_worker_thread + 764
frame #26: 0x000000018ee56fbc libsystem_pthread.dylib`_pthread_wqthread + 772
frame #27: 0x000000018ee56cac libsystem_pthread.dylib`start_wqthread + 4

My project is on GitHub:
https://github.com/hegrecom/iOS-RealTimeCameraPractice

Knee-jerk queries: is captureOutput definitely serialised? You don't inadvertently try to bind the same GL context on two threads at once? It's been a long time since I used AVFoundation, apologies for the dumb question. Also, can you tell us anything else about how this problem is triggered? If it's while your app is backgrounding or backgrounded then it wouldn't be surprising, since GPU usage is prohibited during that time and you don't seem to be doing anything to test for that. – Tommy
@Tommy Yes, captureOutput is handed over to captureSessionQueue = DispatchQueue(label: "capture_session_queue"), and that queue is a serial queue. No, the GL context is accessed only on that queue. I wish I knew a trigger that reproduces the problem, but it just happens randomly right after my app is launched. It's in the foreground, definitely not in the background. – TKang
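
Even though the crash reportedly happens in the foreground, one way to rule out the backgrounding concern raised above is to stop the capture session before the app resigns active and restart it when it becomes active again, so captureOutput stops issuing OpenGL/Core Image work while the app is inactive. A minimal sketch (the helper name is hypothetical and not part of the project above; notification names are the Swift 4 spellings):

// Hypothetical helper on ViewController; call it once, e.g. at the end of viewDidLoad().
func observeAppStateForCapture() {
    let center = NotificationCenter.default
    center.addObserver(forName: .UIApplicationWillResignActive, object: nil, queue: .main) { [weak self] _ in
        // stop frame delivery so no GL/Core Image work runs while the app is inactive
        self?.captureSession?.stopRunning()
    }
    center.addObserver(forName: .UIApplicationDidBecomeActive, object: nil, queue: .main) { [weak self] _ in
        self?.captureSession?.startRunning()
    }
}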

1 Answer


Here is the solution: iOS 11 beta 4 presentRenderbuffer crash

Go to Edit Scheme -> Options -> GPU Frame Capture -> Disabled. The libglInterpose.dylib frames in your stack trace come from Xcode's GPU Frame Capture instrumentation, which suggests the crash originates in that debugging layer rather than in your code.