
I'm building an iOS app that needs to re-encode and trim a video in the background.

I cannot use the iOS libraries (AVFoundation) since they rely on the GPU, and no app can access the GPU while it is in the background.

Due to this issue I switched to FFmpeg, compiled it (alongside libx264), and integrated it into my iOS app.

To sum things up, what I need to do is:

  1. Trim the video to the first 10 seconds
  2. Re-scale the video

After a couple of weeks - and banging my head against the wall quite often - I managed to:

  • split the video container into streams (demuxing)
  • copy the audio stream into the output stream (no decoding or encoding)
  • decode the video stream, run the necessary filters on each frame, encode each resulting frame and remux it into the output stream (I decode the H.264, filter it, and re-encode it back to H.264); see the sketch below
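
Roughly, the read/dispatch loop looks like this. This is only a minimal sketch: fmt_ctx, out_ctx, dec_ctx, the stream indices and filter_encode_write are placeholder names for things set up elsewhere, and most error handling is omitted.

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    /* Sketch of the demux/dispatch loop: audio packets are remuxed as-is,
     * video packets are decoded for filtering/re-encoding. fmt_ctx, out_ctx,
     * dec_ctx, video_idx, audio_idx and out_audio_idx are placeholder names
     * for objects set up elsewhere. */
    static int process_input(AVFormatContext *fmt_ctx, AVFormatContext *out_ctx,
                             AVCodecContext *dec_ctx,
                             int video_idx, int audio_idx, int out_audio_idx)
    {
        AVPacket *pkt  = av_packet_alloc();
        AVFrame *frame = av_frame_alloc();
        if (!pkt || !frame)
            return AVERROR(ENOMEM);

        while (av_read_frame(fmt_ctx, pkt) >= 0) {
            if (pkt->stream_index == audio_idx) {
                /* Audio: stream copy. Rescale timestamps to the output
                 * stream's time base and write the packet untouched. */
                av_packet_rescale_ts(pkt,
                                     fmt_ctx->streams[audio_idx]->time_base,
                                     out_ctx->streams[out_audio_idx]->time_base);
                pkt->stream_index = out_audio_idx;
                av_interleaved_write_frame(out_ctx, pkt);  /* unrefs pkt */
            } else if (pkt->stream_index == video_idx) {
                /* Video: decode, then filter + re-encode each frame
                 * (filtering/encoding not shown here). */
                int ret = avcodec_send_packet(dec_ctx, pkt);
                while (ret >= 0) {
                    ret = avcodec_receive_frame(dec_ctx, frame);
                    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                        break;
                    /* filter_encode_write(frame);  -- hypothetical helper */
                    av_frame_unref(frame);
                }
                av_packet_unref(pkt);
            } else {
                av_packet_unref(pkt);
            }
        }

        av_frame_free(&frame);
        av_packet_free(&pkt);
        return 0;
    }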

If I were to run ffmpeg from the command line, I would run it like this:

ffmpeg -i input.MOV -ss 0 -t 10  -vf scale=320:240 -c:v libx264 -preset ultrafast -c:a copy output.mkv

My concern is how to trim the video. I could count the number of video frames I decode/encode and, based on the FPS, decide when to stop, but I cannot do the same with the audio since I'm only demuxing and remuxing it. (A timestamp-based check is sketched below.)
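
One idea is to look at packet timestamps instead of counting frames: every AVPacket carries a pts in its stream's time base, which can be compared against a 10-second cutoff whether the stream is re-encoded or only copied. A minimal helper along those lines (past_cutoff is a hypothetical name):

    #include <libavformat/avformat.h>
    #include <libavutil/mathematics.h>   /* av_compare_ts() */

    /* Returns non-zero once a packet's pts has passed the 10-second mark,
     * regardless of whether its stream is re-encoded or only copied.
     * Assumes the packet has a valid pts. */
    static int past_cutoff(const AVPacket *pkt, const AVStream *in_stream)
    {
        int64_t cutoff = 10 * AV_TIME_BASE;   /* 10 s in AV_TIME_BASE units */
        return av_compare_ts(pkt->pts, in_stream->time_base,
                             cutoff, AV_TIME_BASE_Q) >= 0;
    }

Once every stream has gone past the cutoff, the read loop can stop and av_write_trailer() can be called.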

Ideally - before scaling the video - I would run a process to trim the video by copying the first 10 seconds of each stream (video and audio) into a new container.

How do I achieve this through the AV libraries?
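
As far as I can tell, such a pre-pass would boil down to a plain remux that drops packets once they pass the cutoff. Here is a rough sketch of what I have in mind (modeled on FFmpeg's remuxing example; trim_copy is a hypothetical name and error handling is kept to a minimum):

    #include <libavformat/avformat.h>
    #include <libavutil/mathematics.h>

    /* Sketch of a trim-only pre-pass: copy the first 10 seconds of every
     * stream from in_path to out_path without decoding or encoding. */
    static int trim_copy(const char *in_path, const char *out_path)
    {
        AVFormatContext *in_ctx = NULL, *out_ctx = NULL;
        AVPacket *pkt = av_packet_alloc();
        int64_t cutoff = 10 * AV_TIME_BASE;            /* 10 s */
        int ret, i;

        if (!pkt)
            return AVERROR(ENOMEM);
        if ((ret = avformat_open_input(&in_ctx, in_path, NULL, NULL)) < 0)
            goto end;
        if ((ret = avformat_find_stream_info(in_ctx, NULL)) < 0)
            goto end;

        avformat_alloc_output_context2(&out_ctx, NULL, NULL, out_path);
        if (!out_ctx) { ret = AVERROR_UNKNOWN; goto end; }

        /* One output stream per input stream, parameters copied verbatim. */
        for (i = 0; i < (int)in_ctx->nb_streams; i++) {
            AVStream *out_st = avformat_new_stream(out_ctx, NULL);
            if (!out_st) { ret = AVERROR(ENOMEM); goto end; }
            if ((ret = avcodec_parameters_copy(out_st->codecpar,
                                               in_ctx->streams[i]->codecpar)) < 0)
                goto end;
            out_st->codecpar->codec_tag = 0;
        }

        if (!(out_ctx->oformat->flags & AVFMT_NOFILE) &&
            (ret = avio_open(&out_ctx->pb, out_path, AVIO_FLAG_WRITE)) < 0)
            goto end;
        if ((ret = avformat_write_header(out_ctx, NULL)) < 0)
            goto end;

        while (av_read_frame(in_ctx, pkt) >= 0) {
            AVStream *in_st  = in_ctx->streams[pkt->stream_index];
            AVStream *out_st = out_ctx->streams[pkt->stream_index];

            /* Drop anything that starts at or after the 10-second mark. */
            if (av_compare_ts(pkt->pts, in_st->time_base,
                              cutoff, AV_TIME_BASE_Q) >= 0) {
                av_packet_unref(pkt);
                continue;
            }

            av_packet_rescale_ts(pkt, in_st->time_base, out_st->time_base);
            pkt->pos = -1;
            if ((ret = av_interleaved_write_frame(out_ctx, pkt)) < 0)
                break;
        }
        av_write_trailer(out_ctx);

    end:
        av_packet_free(&pkt);
        avformat_close_input(&in_ctx);
        if (out_ctx && !(out_ctx->oformat->flags & AVFMT_NOFILE))
            avio_closep(&out_ctx->pb);
        avformat_free_context(out_ctx);
        return ret < 0 ? ret : 0;
    }

Since the copy starts at the beginning of the file and only drops packets at the end, there is no keyframe problem at the start of the trimmed output; the scale/re-encode pass can then run on the trimmed file.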


1 Answer


I know you can do this with one call to ffmpeg:

ffmpeg -i input.MOV -filter_complex "[0:v]trim=duration=10.0,scale=320:240[vid];[0:a]atrim=duration=10.0[aud]" -map "[vid]" -map "[aud]" -c:v libx264 -preset ultrafast -c:a libvo_aacenc -b:a 128k output.mkv
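
If you need the same thing through the libraries, the video branch of that graph can be set up with libavfilter. A rough sketch, modeled on FFmpeg's video filtering example; dec_ctx and time_base are assumed to come from your decoding setup:

    #include <stdio.h>
    #include <libavcodec/avcodec.h>
    #include <libavfilter/avfilter.h>
    #include <libavfilter/buffersink.h>
    #include <libavfilter/buffersrc.h>
    #include <libavutil/mem.h>

    /* Sketch: the video half of the filtergraph above
     * ("trim=duration=10,scale=320:240") built with libavfilter. */
    static int init_video_filters(AVCodecContext *dec_ctx, AVRational time_base,
                                  AVFilterGraph **graph_out,
                                  AVFilterContext **src_out,
                                  AVFilterContext **sink_out)
    {
        const AVFilter *buffersrc  = avfilter_get_by_name("buffer");
        const AVFilter *buffersink = avfilter_get_by_name("buffersink");
        AVFilterInOut *inputs  = avfilter_inout_alloc();
        AVFilterInOut *outputs = avfilter_inout_alloc();
        AVFilterGraph *graph   = avfilter_graph_alloc();
        AVFilterContext *src = NULL, *sink = NULL;
        char args[512];
        int ret;

        /* Buffer source: decoded frames enter the graph here. */
        snprintf(args, sizeof(args),
                 "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
                 dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
                 time_base.num, time_base.den,
                 dec_ctx->sample_aspect_ratio.num, dec_ctx->sample_aspect_ratio.den);
        if ((ret = avfilter_graph_create_filter(&src, buffersrc, "in",
                                                args, NULL, graph)) < 0)
            goto fail;

        /* Buffer sink: filtered frames are pulled from here. */
        if ((ret = avfilter_graph_create_filter(&sink, buffersink, "out",
                                                NULL, NULL, graph)) < 0)
            goto fail;

        /* Wire "in" -> trim,scale -> "out" using the same filter string
         * as the command line. */
        outputs->name       = av_strdup("in");
        outputs->filter_ctx = src;
        outputs->pad_idx    = 0;
        outputs->next       = NULL;
        inputs->name        = av_strdup("out");
        inputs->filter_ctx  = sink;
        inputs->pad_idx     = 0;
        inputs->next        = NULL;

        if ((ret = avfilter_graph_parse_ptr(graph, "trim=duration=10,scale=320:240",
                                            &inputs, &outputs, NULL)) < 0)
            goto fail;
        if ((ret = avfilter_graph_config(graph, NULL)) < 0)
            goto fail;

        *graph_out = graph;
        *src_out   = src;
        *sink_out  = sink;
        ret = 0;
    fail:
        avfilter_inout_free(&inputs);
        avfilter_inout_free(&outputs);
        if (ret < 0)
            avfilter_graph_free(&graph);
        return ret;
    }

Decoded frames are then pushed in with av_buffersrc_add_frame_flags() and pulled out with av_buffersink_get_frame() before being sent to the encoder. The audio side would use abuffer/atrim/abuffersink the same way if you re-encode the audio instead of copying it.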