
I am building an application where users can upload videos and others can watch them later. I am aiming for HLS streaming on the client side, for which I need an .m3u8 playlist and its .ts segments. I am using the Node fluent-ffmpeg module to do the processing. However, I have a big doubt: how do I ensure that all the .ts files (chunks) are stored back in the S3 bucket along with the .m3u8 file after FFmpeg has processed the MP4 file?

The ffmpeg command only takes the output location of the .m3u8 file. How do I handle it when I want both the input and the output location to be S3?

Any help will be greatly appreciated.

I am following the answer from this question: Ffmpeg creating m3u8 from mp4, video file size. It works absolutely fine on my local machine; how do I achieve the same with S3?
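For reference, this is roughly the local conversion I have working (a sketch only; the paths and segment options are just examples):

    // Local HLS conversion with fluent-ffmpeg (works fine on my machine).
    // Paths and HLS option values here are example placeholders.
    const ffmpeg = require('fluent-ffmpeg');

    ffmpeg('/tmp/input.mp4')
      .outputOptions([
        '-hls_time 4',                                  // ~4s per .ts segment
        '-hls_playlist_type vod',
        '-hls_segment_filename /tmp/hls/segment_%d.ts'  // where the chunks land
      ])
      .output('/tmp/hls/playlist.m3u8')                 // the playlist FFmpeg writes
      .on('end', () => console.log('conversion done'))
      .on('error', (err) => console.error(err))
      .run();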


1 Answer


Do the conversion (MP4 in, .m3u8 out), recognizing that the local filesystem snapshot you have on completion is NOT what you want.

You don't want it as-is, because the output .m3u8 contains segment references (#EXTINF entries) that point at each child .ts file using local, relative filesystem paths,

like this:

    #EXTINF:4.0
    ./segment_0.ts     <<  relative path from the .m3u8 output

You will need a post-process that relocates and re-references everything to the remote location:

1. Sync all the local .ts files in /tmp up to your CDN/S3 bucket.

2. Save a mapping from each .ts file's old local path in /tmp to its new S3 URI.

3. Update the .m3u8 file output by fluent-ffmpeg so that each segment entry references the CDN/S3 copy of its .ts file:

        #EXTINF:4.0
        https://${s3Domain}/${s3Bucket}/180_250000/hls/segment_0.ts
        #EXTINF:4.0
        https://${s3Domain}/${s3Bucket}/180_250000/hls/segment_1.ts
        #EXTINF:4.0
        https://${s3Domain}/${s3Bucket}/180_250000/hls/segment_2.ts

4. Sync the updated .m3u8 up to the CDN.

When done, all the local filesystem references from the snapshot left behind when the fluent-ffmpeg process completes have been rewritten, so everything works from the new cloud location.
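A minimal sketch of that post-process in Node, assuming the AWS SDK v3 (@aws-sdk/client-s3) and that fluent-ffmpeg wrote playlist.m3u8 plus segment_*.ts into a local directory; the bucket name, region, key prefix, and file names are placeholders:

    // Brute-force post-process sketch: upload segments, rewrite playlist,
    // upload playlist. Bucket/region/prefix below are placeholders.
    const fs = require('fs');
    const path = require('path');
    const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

    const s3 = new S3Client({ region: 'us-east-1' });   // placeholder region
    const bucket = 'my-video-bucket';                   // placeholder bucket
    const prefix = '180_250000/hls';                    // placeholder key prefix
    const cdnBase = `https://${bucket}.s3.amazonaws.com/${prefix}`;

    async function publishHls(localDir) {
      // Steps 1 + 2: upload each .ts segment; its S3 URI is derived from
      // its filename, which is the old-path -> new-URI mapping for step 3.
      for (const file of fs.readdirSync(localDir).filter((f) => f.endsWith('.ts'))) {
        await s3.send(new PutObjectCommand({
          Bucket: bucket,
          Key: `${prefix}/${file}`,
          Body: fs.readFileSync(path.join(localDir, file)),
          ContentType: 'video/mp2t',
        }));
      }

      // Step 3: rewrite every segment line in the playlist to its S3 URI.
      const playlist = fs
        .readFileSync(path.join(localDir, 'playlist.m3u8'), 'utf8')
        .split('\n')
        .map((line) => (line.trim().endsWith('.ts')
          ? `${cdnBase}/${path.basename(line.trim())}`
          : line))
        .join('\n');

      // Step 4: upload the rewritten playlist itself.
      await s3.send(new PutObjectCommand({
        Bucket: bucket,
        Key: `${prefix}/playlist.m3u8`,
        Body: playlist,
        ContentType: 'application/vnd.apple.mpegurl',
      }));
    }

Once publishHls('/tmp/hls') resolves, point the player at the uploaded playlist's S3 (or CDN) URL rather than anything on the local filesystem.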

That is the brute-force workaround.

OR

you put a service like CloudFront in front of the bucket to do the dirty work for you: if the playlist and all of its segments are uploaded under the same S3 key prefix and served from there, the relative paths inside the .m3u8 resolve against the playlist's own URL, so no rewriting is needed.