21
votes

I am trying to use the new AVFoundation framework for taking still pictures with the iPhone.

With a button press this method is called. I can hear the shutter sound, but I can't see the log output. If I call this method several times, the camera preview freezes.

Is there any tutorial out there on how to use captureStillImageAsynchronouslyFromConnection:?

[[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:[[self stillImageOutput].connections objectAtIndex:0]
                                                     completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
                                                         NSLog(@"inside");
                                                     }];

- (void)initCapture {
    AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput 
                                          deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo] 
                                          error:nil];

    AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];

    captureOutput.alwaysDiscardsLateVideoFrames = YES; 

    dispatch_queue_t queue;
    queue = dispatch_queue_create("cameraQueue", NULL);
    [captureOutput setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey; 
    NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]; 
    NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key]; 
    [captureOutput setVideoSettings:videoSettings]; 

    self.captureSession = [[AVCaptureSession alloc] init];
    self.captureSession.sessionPreset = AVCaptureSessionPresetLow;

    [self.captureSession addInput:captureInput];
    [self.captureSession addOutput:captureOutput];

    self.prevLayer = [AVCaptureVideoPreviewLayer layerWithSession: self.captureSession];

    [self.prevLayer setOrientation:AVCaptureVideoOrientationLandscapeLeft];

    self.prevLayer.frame = CGRectMake(0.0, 0.0, 480.0, 320.0);
    self.prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;

    [self.view.layer addSublayer: self.prevLayer];


    // Setup the default file outputs
    AVCaptureStillImageOutput *_stillImageOutput = [[[AVCaptureStillImageOutput alloc] init] autorelease];
    NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:
                                    AVVideoCodecJPEG, AVVideoCodecKey,
                                    nil];
    [_stillImageOutput setOutputSettings:outputSettings];
    [outputSettings release];
    [self setStillImageOutput:_stillImageOutput];   

    if ([self.captureSession canAddOutput:stillImageOutput]) {
        [self.captureSession addOutput:stillImageOutput];
    }

    [self.captureSession commitConfiguration];
    [self.captureSession startRunning];

}
5
I don't know if this was true back in 2010, but as of late 2011 I have easily been using captureStillImageAsynchronouslyFromConnection at the same time as getting a video feed via AVCaptureVideoDataOutput's delegate callback, captureOutput, while also using a previewLayer. I get a 5-megapixel image from the still output, and 852x640 for the video feed and previewLayer. – mahboudz
The above comment was based on an iPhone 4. On an iPhone 4S, I get an 8-megapixel still while getting 852x640 for the video feed and previewLayer. – mahboudz
What's the difference between initWithSession: and layerWithSession:? The docs don't discuss when to use one vs. the other: developer.apple.com/library/ios/#documentation/AVFoundation/… – knite

5 Answers

16
votes

We had this problem when 4.0 was still in beta. I tried a fair number of things. Here goes:

  • AVCaptureStillImageOutput and AVCaptureVideoDataOutput do not appear to play nicely with each other. If the video output is running, the image output never seems to complete (until you pause the session by putting the phone to sleep; then you seem to get a single image out).
  • AVCaptureStillImageOutput only seems to work sensibly with AVCaptureSessionPresetPhoto; otherwise you effectively get JPEG-encoded video frames. Might as well use higher-quality BGRA frames (incidentally, the camera's native output appears to be BGRA; it doesn't appear to have the colour subsampling of 2vuy/420v).
  • The video (everything that isn't Photo) and Photo presets seem fundamentally different; you never get any video frames if the session is in photo mode (you don't get an error either). Maybe they changed this...
  • You can't seem to have two capture sessions (one with a video preset and a video output, one with Photo preset and an image output). They might have fixed this.
  • You can stop the session, change the preset to photo, start the session, take the photo, and when the photo completes, stop, change the preset back, and start again. This takes a while and the video preview layer stalls and looks terrible (it re-adjusts exposure levels). This also occasionally deadlocked in the beta (after calling -stopRunning, session.running was still YES).
  • You might be able to disable the AVCaptureConnection (it's supposed to work). I remember this deadlocking; they may have fixed this.
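For reference, the stop / switch-to-Photo / capture / restore cycle described above looks roughly like this. This is only a sketch: the session and stillImageOutput properties, the restored Medium preset, and the error handling are assumptions, and you should expect the preview stall mentioned above.

    - (void)takePhotoBySwitchingPresets {
        // Pause the session and switch to the Photo preset for a full-res capture.
        [self.session stopRunning];
        self.session.sessionPreset = AVCaptureSessionPresetPhoto;
        [self.session startRunning];

        AVCaptureConnection *connection = [self.stillImageOutput.connections objectAtIndex:0];
        [self.stillImageOutput captureStillImageAsynchronouslyFromConnection:connection
                                                           completionHandler:^(CMSampleBufferRef buffer, NSError *error) {
            if (buffer) {
                NSData *jpeg = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:buffer];
                // ... do something with the JPEG data ...
            }
            // Restore the video preset so the preview and video output resume.
            [self.session stopRunning];
            self.session.sessionPreset = AVCaptureSessionPresetMedium;
            [self.session startRunning];
        }];
    }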

I ended up just capturing video frames. The "take picture" button simply sets a flag; in the video frame callback, if the flag is set, it returns the video frame instead of a UIImage*. This was sufficient for our image-processing needs — "take picture" exists largely so the user can get a negative response (and an option to submit a bug report); we don't actually want 2/3/5 megapixel images, since they take ages to process.
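A sketch of that flag approach (the property and helper names here are illustrative, not from the original code; the BGRA-to-UIImage conversion is assumed to exist elsewhere):

    // "Take picture" just raises a flag...
    - (IBAction)takePicture {
        self.wantsFrame = YES;
    }

    // ...and the video delegate callback consumes it on the next frame.
    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection {
        if (!self.wantsFrame) return;
        self.wantsFrame = NO;

        UIImage *image = [self imageFromSampleBuffer:sampleBuffer]; // assumed BGRA conversion helper
        dispatch_async(dispatch_get_main_queue(), ^{
            [self handleCapturedImage:image];
        });
    }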

If video frames are not good enough (i.e. you want to capture viewfinder frames between high-res image captures), I'd first see whether they've fixed using multiple AVCapture sessions, since that's the only way you can set both presets.

It's probably worth filing a bug. I filed one around the launch of the 4.0 GM; Apple asked me for some sample code, but by then I'd decided to use the video-frame workaround and had a release to ship.

Additionally, the "low" preset is very low-res (and results in a low-res, low-framerate video preview). I'd go for 640x480 if available, falling back to Medium if not.
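Something like the following should do; AVCaptureSessionPreset640x480 is available from iOS 4, and -canSetSessionPreset: tells you whether the current device supports it:

    if ([session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
        session.sessionPreset = AVCaptureSessionPreset640x480;
    } else {
        session.sessionPreset = AVCaptureSessionPresetMedium;
    }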

62
votes

After a lot of trial and error, I worked out how to do this.

Hint: Apple's official docs are - simply - wrong. The code they give you doesn't actually work.

I wrote it up here with step-by-step instructions:

http://red-glasses.com/index.php/tutorials/ios4-take-photos-with-live-video-preview-using-avfoundation/

Lots of code on the link, but in summary:

-(void) viewDidAppear:(BOOL)animated
{
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetMedium;

    CALayer *viewLayer = self.vImagePreview.layer;
    NSLog(@"viewLayer = %@", viewLayer);

    AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];

    captureVideoPreviewLayer.frame = self.vImagePreview.bounds;
    [self.vImagePreview.layer addSublayer:captureVideoPreviewLayer];

    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input) {
        // Handle the error appropriately.
        NSLog(@"ERROR: trying to open camera: %@", error);
    }
    [session addInput:input];

    stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:
                                    AVVideoCodecJPEG, AVVideoCodecKey,
                                    nil];
    [stillImageOutput setOutputSettings:outputSettings];

    [session addOutput:stillImageOutput];

    [session startRunning];
}

-(IBAction) captureNow
{
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in stillImageOutput.connections)
    {
        for (AVCaptureInputPort *port in [connection inputPorts])
        {
            if ([[port mediaType] isEqual:AVMediaTypeVideo] )
            {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) { break; }
    }

    NSLog(@"about to request a capture from: %@", stillImageOutput);
    [stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
    {
         CFDictionaryRef exifAttachments = CMGetAttachment( imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
         if (exifAttachments)
         {
            // Do something with the attachments.
            NSLog(@"attachments: %@", exifAttachments);
         }
        else
            NSLog(@"no attachments");

        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
        UIImage *image = [[UIImage alloc] initWithData:imageData];

        self.vImage.image = image;
     }];
}
6
votes

This has been a huge help - I was stuck in the weeds for quite a while trying to follow the AVCam example.

Here is a complete working project with my comments that explain what is happening. This illustrates how you can use the capture manager with multiple outputs. In this example there are two outputs.

The first is the still image output of the example above.

The second provides frame-by-frame access to the video coming off the camera. You can add more code to do something interesting with the frames if you like; in this example I'm just updating a frame counter on screen from within the delegate callback.

https://github.com/tdsltm/iphoneStubs/tree/master/VideoCamCaptureExample-RedGlassesBlog/VideoCamCaptureExample-RedGlassesBlog

0
votes

You should use Adam's answer, but if you use Swift (like most of you probably do nowadays), here's a Swift 1.2 port of his code:

  1. Make sure you import ImageIO
  2. Add a property private var stillImageOutput: AVCaptureStillImageOutput!
  3. Instantiate stillImageOutput before captureSession.startRunning():

Like this:

stillImageOutput = AVCaptureStillImageOutput()
stillImageOutput.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
captureSession.addOutput(stillImageOutput)

Then use this code to capture an image:

private func captureImage() {
    var videoConnection: AVCaptureConnection?
    for connection in stillImageOutput.connections as! [AVCaptureConnection] {
        for port in connection.inputPorts {
            if port.mediaType == AVMediaTypeVideo {
                videoConnection = connection
                break
            }
        }
        if videoConnection != nil {
            break
        }
    }
    print("about to request a capture from: \(stillImageOutput)")
    stillImageOutput.captureStillImageAsynchronouslyFromConnection(videoConnection) { (imageSampleBuffer: CMSampleBuffer!, error: NSError!) -> Void in
        let exifAttachments = CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, nil)
        if let attachments = exifAttachments {
            // Do something with the attachments
            print("attachments: \(attachments)")
        } else {
            print("no attachments")
        }
        let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageSampleBuffer)
        let image = UIImage(data: imageData)
        // Do something with the image
    }
}

This all assumes that you already have an AVCaptureSession setup and just need to take a still from it, as did I.