I'm writing a piece of code that generates a slide show video from multiple images and multiple videos on iOS devices. I was able to do this with a single video and multiple images, but I haven't been able to figure out how to extend it to multiple videos.
Here is the sample video I was able to generate with one video and two images.
Here is the main routine, which prepares the exporter.
// Prepare the temporary location to store generated video
NSURL *urlAsset = [NSURL fileURLWithPath:[StoryMaker tempFilePath:@"mov"]];
// Prepare composition and _exporter
AVMutableComposition *composition = [AVMutableComposition composition];
AVAssetExportSession* exporter = [[AVAssetExportSession alloc] initWithAsset:composition presetName:AVAssetExportPresetHighestQuality];
exporter.outputURL = urlAsset;
exporter.outputFileType = AVFileTypeQuickTimeMovie;
exporter.shouldOptimizeForNetworkUse = YES;
exporter.videoComposition = [self _addVideo:composition time:timeVideo];
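Not shown above: once the video composition is attached, the export is kicked off in the usual way. Roughly (completion handling trimmed down here):

[exporter exportAsynchronouslyWithCompletionHandler:^{
    if (exporter.status == AVAssetExportSessionStatusCompleted) {
        // the finished movie is at exporter.outputURL (urlAsset)
    } else {
        NSLog(@"Export failed: %@", exporter.error);
    }
}];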
Here is the _addVideo:time: method, which creates the videoLayer.
-(AVVideoComposition*) _addVideo:(AVMutableComposition*)composition time:(CMTime)timeVideo {
AVMutableVideoComposition* videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.renderSize = _sizeVideo;
videoComposition.frameDuration = CMTimeMake(1,30); // 30fps
AVMutableCompositionTrack *compositionVideoTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
[compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero,timeVideo) ofTrack:_baseVideoTrack atTime:kCMTimeZero error:nil];
// Prepare the parent layer
CALayer *parentLayer = [CALayer layer];
parentLayer.backgroundColor = [UIColor blackColor].CGColor;
parentLayer.frame = CGRectMake(0, 0, _sizeVideo.width, _sizeVideo.height);
// Prepare images parent layer
CALayer *imageParentLayer = [CALayer layer];
imageParentLayer.frame = CGRectMake(0, 0, _sizeVideo.width, _sizeVideo.height);
[parentLayer addSublayer:imageParentLayer];
// Specify the perspective transform
CATransform3D perspective = CATransform3DIdentity;
perspective.m34 = -1.0 / imageParentLayer.frame.size.height;
imageParentLayer.sublayerTransform = perspective;
// Animations
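// Note: a CAAnimation beginTime of exactly 0 is interpreted as "begin now"
// when rendered by the animation tool, so a tiny non-zero epsilon
// (or AVCoreAnimationBeginTimeAtZero) is used instead.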
_beginTime = 1E-10;
_endTime = CMTimeGetSeconds(timeVideo);
CALayer* videoLayer = [self _addVideoLayer:imageParentLayer];
[self _addAnimations:imageParentLayer time:timeVideo];
videoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
// Prepare the instruction
AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
{
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, timeVideo);
AVAssetTrack *videoTrack = [[composition tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVMutableVideoCompositionLayerInstruction* layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
[layerInstruction setTransform:_baseVideoTrack.preferredTransform atTime:kCMTimeZero];
instruction.layerInstructions = @[layerInstruction];
}
videoComposition.instructions = @[instruction];
return videoComposition;
}
The _addAnimations:time: method adds the image layers and schedules the animations of all the layers, including the _videoLayer.
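In outline it does something like this (a reduced sketch, not the actual code; _images and the single fade animation are simplified placeholders for the real per-image timing):

-(void) _addAnimations:(CALayer*)parentLayer time:(CMTime)timeVideo {
    NSTimeInterval duration = CMTimeGetSeconds(timeVideo);
    NSTimeInterval slot = duration / (_images.count + 1); // _images is an assumed ivar holding the UIImages
    NSTimeInterval begin = _beginTime;
    for (UIImage *image in _images) {
        // One CALayer per image, initially invisible
        CALayer *imageLayer = [CALayer layer];
        imageLayer.contents = (__bridge id)image.CGImage;
        imageLayer.frame = parentLayer.bounds;
        imageLayer.opacity = 0.0;
        [parentLayer addSublayer:imageLayer];

        // Fade the image in at its slot on the composition timeline
        CABasicAnimation *fade = [CABasicAnimation animationWithKeyPath:@"opacity"];
        fade.fromValue = @(0.0);
        fade.toValue = @(1.0);
        fade.beginTime = begin;      // never exactly 0 (see the note above)
        fade.duration = slot;
        fade.fillMode = kCAFillModeForwards;
        fade.removedOnCompletion = NO;
        [imageLayer addAnimation:fade forKey:@"fade"];

        begin += slot;
    }
}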
Everything works fine so far.
However, I can't figure out how to add a second video to this slide show.
The sample in the AVFoundation Programming Guide uses multiple video composition instructions (AVMutableVideoCompositionInstruction) to combine two videos, but it renders them all into the single CALayer that is passed to AVVideoCompositionCoreAnimationTool's videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:inLayer: method.
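Roughly, that pattern looks like this (timeA/timeB and videoTrackA/videoTrackB are placeholder names, error handling omitted): each source track gets its own composition track, and each instruction covers the time range where that track plays.

AVMutableCompositionTrack *trackA = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
[trackA insertTimeRange:CMTimeRangeMake(kCMTimeZero, timeA) ofTrack:videoTrackA atTime:kCMTimeZero error:nil];
AVMutableCompositionTrack *trackB = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
[trackB insertTimeRange:CMTimeRangeMake(kCMTimeZero, timeB) ofTrack:videoTrackB atTime:timeA error:nil];

AVMutableVideoCompositionInstruction *instructionA = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instructionA.timeRange = CMTimeRangeMake(kCMTimeZero, timeA);
instructionA.layerInstructions = @[[AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:trackA]];

AVMutableVideoCompositionInstruction *instructionB = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instructionB.timeRange = CMTimeRangeMake(timeA, timeB);
instructionB.layerInstructions = @[[AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:trackB]];

videoComposition.instructions = @[instructionA, instructionB];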
I want to render the two video tracks into two separate layers (layer1 and layer2) and animate them independently, just as I do with the layers associated with the images.
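To make the goal concrete, this is roughly the structure I'm after (layer1/layer2 are placeholder names); the sticking point is that videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:inLayer: only takes a single video layer:

// Desired structure (sketch only; this is the part I can't get working)
CALayer *layer1 = [CALayer layer];   // should display video track 1
CALayer *layer2 = [CALayer layer];   // should display video track 2
layer1.frame = CGRectMake(0, 0, _sizeVideo.width, _sizeVideo.height);
layer2.frame = CGRectMake(0, 0, _sizeVideo.width, _sizeVideo.height);
[imageParentLayer addSublayer:layer1];
[imageParentLayer addSublayer:layer2];
// ...animate layer1 and layer2 independently, like the image layers...
// but the animation tool only accepts one videoLayer:
videoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];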