
The goal is to overlay an image on top of a video, but using AVVideoCompositionCoreAnimationTool pixelates the image.

The image dimensions are 640x1136. The video export dimensions are 320x568 (to mimic 5S device) so the image should scale down nicely. The image itself is sharp, but something during the export process causes pixelation.

Playing with renderScale on AVMutableVideoComposition did not help, as AVAssetExportSession throws an exception if the value is anything other than 1.0.

Setting contentsGravity for the layer holding the image seems to have no effect.

The goal is to let a user record a video and then draw on it. (The image represents the user's drawing.) Ultimately, the exported video should match what the user saw in the video preview and what the user drew, with the same quality and dimensions. This question focuses on the overlay image pixelation.

Help?

    // Create main composition & its tracks
    let mainComposition = AVMutableComposition()
    let compositionVideoTrack = mainComposition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))
    let compositionAudioTrack = mainComposition.addMutableTrackWithMediaType(AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))

    // Get source video & audio tracks
    let videoURL = NSURL(fileURLWithPath: videoURL)
    let videoAsset = AVURLAsset(URL: videoURL, options: nil)
    let sourceVideoTrack = videoAsset.tracksWithMediaType(AVMediaTypeVideo)[0]
    let sourceAudioTrack = videoAsset.tracksWithMediaType(AVMediaTypeAudio)[0]

    // Add source tracks to composition
    do {
        try compositionVideoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration), ofTrack: sourceVideoTrack, atTime: kCMTimeZero)
        try compositionAudioTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration), ofTrack: sourceAudioTrack, atTime: kCMTimeZero)
    } catch {
        print("Error with insertTimeRange while exporting video: \(error)")
    }

    // Create video composition
    let videoComposition = AVMutableVideoComposition()
    print("Video composition duration: \(CMTimeGetSeconds(mainComposition.duration))")

    // -- Set parent layer & set size equal to device bounds
    let parentLayer = CALayer()
    parentLayer.frame = CGRectMake(0, 0, view.bounds.width, view.bounds.height)
    parentLayer.backgroundColor = UIColor.redColor().CGColor
    parentLayer.contentsGravity = kCAGravityResizeAspectFill

    // -- Set composition equal to capture settings
    videoComposition.renderSize = CGSize(width: view.bounds.width, height: view.bounds.height)
    videoComposition.frameDuration = CMTimeMake(1, Int32(frameRate))

    // -- Add instruction to video composition object
    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRangeMake(kCMTimeZero, compositionVideoTrack.asset!.duration)
    let videoLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: compositionVideoTrack)
    instruction.layerInstructions = [videoLayerInstruction]
    videoComposition.instructions = [instruction]

    // -- Create video layer
    let videoLayer = CALayer()
    videoLayer.frame = parentLayer.frame
    videoLayer.contentsGravity = kCAGravityResizeAspectFill

    // -- Create overlay layer
    let overlayLayer = CALayer()
    overlayLayer.frame = parentLayer.frame
    overlayLayer.contentsGravity = kCAGravityResizeAspectFill
    overlayLayer.contents = overlayImage!.CGImage
    overlayLayer.contentsScale = overlayImage!.scale

    // -- Add sublayers to parent layer
    parentLayer.addSublayer(videoLayer)
    parentLayer.addSublayer(overlayLayer)
    //overlayLayer.shouldRasterize = true

    // -- Set animation tool
    videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, inLayer: parentLayer)

    // Create exporter & run the export
    let outputURL = getFilePath(getUniqueFilename(gMP4File))
    let exporter = AVAssetExportSession(asset: mainComposition, presetName: AVAssetExportPresetHighestQuality)!
    exporter.outputURL = NSURL(fileURLWithPath: outputURL)
    exporter.outputFileType = AVFileTypeMPEG4
    exporter.videoComposition = videoComposition
    exporter.shouldOptimizeForNetworkUse = true
    exporter.exportAsynchronouslyWithCompletionHandler {
        // Check exporter.status / exporter.error here
    }

1 Answer


After conducting several tests with rasterizationScale and contentsScale, combining both helped the most, though lines are still not as sharp as the original.

Hopefully someone finds an answer on how to preserve the sharpness of the original image when merging with a video.

Note that you probably also need to set shouldRasterize = true when using rasterizationScale.
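As a minimal sketch, the pairing on the overlay layer would look like this (deviceScale is my assumption; UIScreen.mainScreen().scale gives the native scale of the device):

```swift
// shouldRasterize has no visible effect unless rasterizationScale is
// also set; the default rasterizationScale of 1.0 will look blurry on
// Retina devices.
let deviceScale = UIScreen.mainScreen().scale   // e.g., 2.0 on a 5S
overlayLayer.shouldRasterize = true
overlayLayer.rasterizationScale = deviceScale
```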

These tests were conducted at device scale (e.g., 2.0 for the 5S) and at 2x device scale (e.g., 4.0 for the 5S). I saw 2x device scale used elsewhere and decided to try it, even though its effect is unclear.

contentsScale 2.0: straight lines were crisp, but circles contained artifacts.

contentsScale 4.0: straight lines were okay but not as crisp as at 2.0; circles contained fewer artifacts. Overall a better image.

rasterizationScale 2.0: straight lines were crisp, but rounded areas (e.g., in the letter "R") were horrible.

rasterizationScale 4.0: straight lines were not as sharp, but rounded areas were better.

rasterizationScale + contentsScale 2.0: best compromise; lines are still not as sharp as in the original image.
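Putting the best compromise together, the overlay layer setup from the question would change to something like the following. This is a sketch of what worked best in my tests, not a definitive fix; deviceScale is assumed from UIScreen:

```swift
// Best compromise found: rasterizationScale + contentsScale, both at
// device scale. Lines are still slightly softer than the source image.
let deviceScale = UIScreen.mainScreen().scale   // 2.0 on a 5S

let overlayLayer = CALayer()
overlayLayer.frame = parentLayer.frame
overlayLayer.contentsGravity = kCAGravityResizeAspectFill
overlayLayer.contents = overlayImage!.CGImage
overlayLayer.contentsScale = deviceScale        // was overlayImage!.scale
overlayLayer.shouldRasterize = true
overlayLayer.rasterizationScale = deviceScale
```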