I'm currently trying to use Android as a Skype endpoint. At this stage, I need to encode video into H.264 (since it's the only format supported by Skype) and encapsulate it with RTP in order to make the streaming work.
Apparently the MediaRecorder is not well suited for this, for various reasons. One is that it only writes the MP4 or 3GP headers after recording has finished. Another is that, to reduce latency to a minimum, hardware acceleration may come in handy. That's why I would like to make use of the recent low-level additions to the framework: MediaCodec, MediaExtractor, etc.
At the moment, I plan to work as follows. The camera writes its video into a buffer. MediaCodec encodes the video as H.264 and writes the result to another buffer. That buffer is read by an RTP encapsulator, which sends the stream data to the server. Here's my first question: does this plan sound feasible to you?
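For the middle stage of that pipeline, here is a minimal sketch of how I imagine the encoder would be configured, assuming API 16's MediaCodec. The resolution, bitrate, frame rate, and color format below are placeholder assumptions, not values I've verified against any particular device:

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;

public class EncoderSetup {
    // Sketch: configure a hardware H.264 ("video/avc") encoder via
    // MediaCodec (API 16+). All numeric values are placeholders; in
    // practice the supported color formats should be queried from the
    // codec's CodecCapabilities rather than hard-coded.
    public static MediaCodec createAvcEncoder() throws java.io.IOException {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", 640, 480);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 500000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 15);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar);
        MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        encoder.start();
        return encoder;
    }
}
```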
Now I'm already stuck on step one. Since all the documentation on the internet about using the camera relies on MediaRecorder, I cannot find a way to store the camera's raw data in a buffer before encoding. Is addCallbackBuffer suited for this? Does anyone have a link to an example?
Next, I cannot find much documentation about MediaCodec (since it's fairly new). Does anyone know of a solid tutorial?
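For reference, here is how I currently understand the synchronous MediaCodec pattern (API 16 buffer arrays): feed one raw frame in, then drain whatever encoded output is ready. This is only a sketch; it skips the INFO_OUTPUT_BUFFERS_CHANGED and INFO_OUTPUT_FORMAT_CHANGED return codes a real loop must handle, and it assumes the encoder is already configured and started:

```java
import java.nio.ByteBuffer;
import android.media.MediaCodec;

public class EncodeLoop {
    // Sketch of one iteration of the synchronous MediaCodec loop.
    // 'rawFrame' is a raw YUV frame from the camera callback.
    public static void encodeFrame(MediaCodec encoder, byte[] rawFrame,
                                   long presentationTimeUs) {
        ByteBuffer[] inputBuffers = encoder.getInputBuffers();
        ByteBuffer[] outputBuffers = encoder.getOutputBuffers();

        // Submit the raw frame to the encoder's input side.
        int inIndex = encoder.dequeueInputBuffer(10000); // microseconds
        if (inIndex >= 0) {
            ByteBuffer in = inputBuffers[inIndex];
            in.clear();
            in.put(rawFrame);
            encoder.queueInputBuffer(inIndex, 0, rawFrame.length,
                    presentationTimeUs, 0);
        }

        // Drain any encoded output that is ready.
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        int outIndex = encoder.dequeueOutputBuffer(info, 10000);
        while (outIndex >= 0) {
            ByteBuffer out = outputBuffers[outIndex];
            byte[] chunk = new byte[info.size];
            out.position(info.offset);
            out.get(chunk);
            // 'chunk' now holds encoded H.264 data (NAL units) -- this is
            // what the RTP encapsulator would consume.
            encoder.releaseOutputBuffer(outIndex, false);
            outIndex = encoder.dequeueOutputBuffer(info, 10000);
        }
    }
}
```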
Lastly: any recommendations on RTP libraries?
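In case it helps frame the question: my understanding is that the fixed 12-byte RTP header itself (RFC 3550) is simple to build by hand, and the hard part a library would solve is the H.264-specific packetization (RFC 6184, e.g. FU-A fragmentation of NAL units larger than the MTU). A sketch of just the header part, with the payload type, sequence number, timestamp, and SSRC as caller-supplied assumptions:

```java
import java.nio.ByteBuffer;

public class RtpPacketizer {
    // Sketch: prepend a 12-byte RTP header (RFC 3550) to a payload.
    // RFC 6184 packetization (FU-A fragmentation etc.) is NOT handled
    // here -- large NAL units would still need to be split.
    public static byte[] buildPacket(byte[] payload, int payloadType,
                                     int seq, long timestamp, int ssrc,
                                     boolean marker) {
        ByteBuffer buf = ByteBuffer.allocate(12 + payload.length);
        buf.put((byte) 0x80);                      // V=2, P=0, X=0, CC=0
        buf.put((byte) ((marker ? 0x80 : 0)        // marker bit
                | (payloadType & 0x7F)));          // 7-bit payload type
        buf.putShort((short) seq);                 // sequence number
        buf.putInt((int) timestamp);               // RTP timestamp
        buf.putInt(ssrc);                          // synchronization source
        buf.put(payload);
        return buf.array();
    }
}
```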
Thanks a lot in advance!