I noticed in the iOS documentation for AVAssetWriterInput that you can pass nil for the outputSettings dictionary to specify that the input data should not be re-encoded:

"The settings used for encoding the media appended to the output. Pass nil to specify that appended samples should not be re-encoded."
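For reference, setting up such a pass-through input is just this (assetWriter here stands in for an already-configured AVAssetWriter):

// Pass nil for outputSettings so appended samples are written as-is,
// without re-encoding.
AVAssetWriterInput *videoInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:nil];
videoInput.expectsMediaDataInRealTime = YES;
[assetWriter addInput:videoInput];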
I want to take advantage of this feature to pass in a stream of raw H.264 NALs, but I am having trouble adapting my raw byte streams into a CMSampleBuffer that I can pass into AVAssetWriterInput's appendSampleBuffer method. My stream of NALs contains only SPS/PPS/IDR/P NALs (types 1, 5, 7, 8). I haven't been able to find documentation or a conclusive answer on how to use pre-encoded H.264 data with AVAssetWriter, and the resulting video file cannot be played.

How can I properly package the NAL units into CMSampleBuffers? Do I need to use a start code prefix? A length prefix? Do I need to ensure I put only one NAL per CMSampleBuffer? My end goal is to create an MP4 or MOV container with H.264/AAC.
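To illustrate the length-prefix half of that question, here's the framing conversion I've been experimenting with. It rests on my assumption (not confirmed) that MP4/MOV-style samples carry each NAL as a 4-byte big-endian length followed by the payload, in place of the Annex B start code:

// Hypothetical helper: wrap one raw NAL (start code already stripped) in
// AVCC framing, i.e. a 4-byte big-endian length followed by the payload.
static NSData *AVCCFramedNAL(NSData *nal)
{
    uint32_t lengthBE = CFSwapInt32HostToBig((uint32_t)[nal length]);
    NSMutableData *framed = [NSMutableData dataWithCapacity:[nal length] + sizeof(lengthBE)];
    [framed appendBytes:&lengthBE length:sizeof(lengthBE)];
    [framed appendData:nal];
    return framed;
}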
Here's the code I've been playing with:
-(void)addH264NAL:(NSData *)nal
{
    dispatch_async(recordingQueue, ^{
        // Adapting the raw NAL into a CMSampleBuffer
        CMSampleBufferRef sampleBuffer = NULL;
        CMBlockBufferRef blockBuffer = NULL;
        CMFormatDescriptionRef formatDescription = NULL;
        CMItemCount numberOfSampleTimeEntries = 1;
        CMItemCount numberOfSamples = 1;

        // Bare-bones format description: codec type and dimensions only,
        // no extensions (so no SPS/PPS attached).
        CMVideoFormatDescriptionCreate(kCFAllocatorDefault, kCMVideoCodecType_H264, 480, 360, nil, &formatDescription);

        // Allocate a block buffer large enough for the NAL...
        OSStatus result = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault, NULL, [nal length], kCFAllocatorDefault, NULL, 0, [nal length], kCMBlockBufferAssureMemoryNowFlag, &blockBuffer);
        if (result != noErr)
        {
            NSLog(@"Error creating CMBlockBuffer");
            return;
        }

        // ...and copy the NAL bytes into it.
        result = CMBlockBufferReplaceDataBytes([nal bytes], blockBuffer, 0, [nal length]);
        if (result != noErr)
        {
            NSLog(@"Error filling CMBlockBuffer");
            return;
        }

        // Wrap the block buffer in a single-sample CMSampleBuffer.
        const size_t sampleSizes = [nal length];
        CMSampleTimingInfo timing = { 0 };
        result = CMSampleBufferCreate(kCFAllocatorDefault, blockBuffer, YES, NULL, NULL, formatDescription, numberOfSamples, numberOfSampleTimeEntries, &timing, 1, &sampleSizes, &sampleBuffer);
        if (result != noErr)
        {
            NSLog(@"Error creating CMSampleBuffer");
        }

        [self writeSampleBuffer:sampleBuffer ofType:AVMediaTypeVideo];
    });
}
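One thing I suspect is that the bare format description above (no SPS/PPS extensions) may be part of the problem. If CMVideoFormatDescriptionCreateFromH264ParameterSets is an option, building the description from the SPS and PPS NALs I already receive might look like this (a sketch; spsData and ppsData are hypothetical NSData objects holding the raw NAL bytes without start codes):

// Sketch: derive the format description from the stream's own SPS (type 7)
// and PPS (type 8) instead of creating a bare one.
const uint8_t *paramSets[2] = { (const uint8_t *)[spsData bytes],
                                (const uint8_t *)[ppsData bytes] };
const size_t paramSetSizes[2] = { [spsData length], [ppsData length] };
CMFormatDescriptionRef fmtDesc = NULL;
OSStatus status = CMVideoFormatDescriptionCreateFromH264ParameterSets(
    kCFAllocatorDefault,
    2,              // one SPS + one PPS
    paramSets,
    paramSetSizes,
    4,              // NAL unit header length: 4-byte length prefixes
    &fmtDesc);
if (status != noErr)
{
    NSLog(@"Error creating format description from parameter sets");
}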
Note that I'm calling CMSampleBufferSetOutputPresentationTimeStamp on the sample buffer inside of the writeSampleBuffer method, with what I think is a valid time, before I actually try to append it.
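For completeness, the relevant part of that method looks roughly like this (simplified; writerInput and nextPresentationTime stand in for my real instance variables):

- (void)writeSampleBuffer:(CMSampleBufferRef)sampleBuffer ofType:(NSString *)mediaType
{
    // Stamp a presentation time onto the buffer before appending it.
    // (The real method picks the video or audio input based on mediaType.)
    CMSampleBufferSetOutputPresentationTimeStamp(sampleBuffer, nextPresentationTime);
    if (writerInput.readyForMoreMediaData && ![writerInput appendSampleBuffer:sampleBuffer])
    {
        NSLog(@"Failed to append sample buffer");
    }
}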
Any help is appreciated.
Update: As noted above, I'm using CMSampleBufferSetOutputPresentationTimeStamp to fill in a real timestamp. I now realize that I also need to fill in the other fields of CMSampleTimingInfo. I'm setting decodeTimeStamp to kCMTimeInvalid and duration to CMTimeMake(1, 30). I now get a seekable video container with a proper total time, but there's no video (testing in VLC).