I'm trying to understand the "chunked" aspect of using HTTP Live Streaming to serve a static video file to an iOS device. Where does the chunking of the video file happen?
Edit: from reading Apple's HTTP Live Streaming documentation and a bit more of http://tools.ietf.org/html/draft-pantos-http-live-streaming-07, it sounds like the video file is split into .ts segments on the server. Alternatively, the m3u8 playlist can specify byte offsets into a single file (apparently using the EXT-X-BYTERANGE tag).
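For illustration, here's a rough sketch of what I gather a media playlist might look like when the server pre-splits the file into segments (the filenames and durations below are made up):

```
#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10,
segment0.ts
#EXTINF:10,
segment1.ts
#EXTINF:10,
segment2.ts
#EXT-X-ENDLIST
```

And if I'm reading the draft correctly, the EXT-X-BYTERANGE variant keeps a single file on the server and points each playlist entry at a length@offset within it instead (again, the numbers are invented):

```
#EXTM3U
#EXT-X-VERSION:4
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10,
#EXT-X-BYTERANGE:1048576@0
video.ts
#EXTINF:10,
#EXT-X-BYTERANGE:1048576@1048576
video.ts
#EXT-X-ENDLIST
```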
Here's what I understand of this process after reading Apple's HLS description and http://tools.ietf.org/html/draft-pantos-http-live-streaming-07:
- A static file lives on my server. It has the proper audio/video encoding (H.264 and AAC).
- I'll pass an m3u8 playlist to the media player (MPMoviePlayerController or similar) in my app; a rough sketch of that hand-off is after this list.
- The app will "reload the index" during media playback. In other words, the player will request additional segments to play.
- Each 10-second segment is in an MPEG-2 Transport Stream (.ts) container.
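To make that hand-off concrete, here's a minimal Swift sketch of what I mean by passing the m3u8 to the player. The URL is hypothetical, and I'm assuming MPMoviePlayerController here (AVPlayer would presumably work similarly); the point is that the player itself fetches the playlist and then each segment, so my code never touches the chunks:

```swift
import MediaPlayer

// The URL points at the m3u8 playlist, not at a video file.
// (Hypothetical URL; the player downloads the playlist, then requests
// each .ts segment -- or byte range -- that the playlist lists.)
let playlistURL = URL(string: "https://example.com/video/index.m3u8")!

let player = MPMoviePlayerController(contentURL: playlistURL)

// player.view would still need to be added to the view hierarchy
// before anything shows on screen.
player.prepareToPlay()
player.play()
```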
My understanding of this process is incomplete (and perhaps incorrect). Any additional info is much appreciated.