Background: I demux a video file, decode the video track, apply some changes to the frames received, then encode and mux them again.
The known issue with doing this on Android is the number of vendor-specific encoder/decoder color formats. Android 4.3 introduced surfaces to become device-independent, but I found them hard to work with because my frame-changing routines require a Canvas to draw to.
Since Android 5.0, the use of flexible YUV420 color formats is promising. Together with getOutputImage for decoding and getInputImage for encoding, Image objects can be used to access frames in whatever format the codec provides. I got decoding working using getOutputImage and could visualize the result after RGB conversion. For encoding a YUV image and queuing it into a MediaCodec (encoder), however, there seems to be a missing link:
After dequeuing an input buffer from MediaCodec
int inputBufferId = encoder.dequeueInputBuffer (5000);
I can get access to a proper image returned by
encoder.getInputImage (inputBufferId);
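For reference, filling the image means honoring each plane's rowStride (and pixelStride) as reported by Image.getPlanes(). A minimal sketch of a stride-aware copy, using a plain ByteBuffer in place of a real plane buffer (the width, height and stride values here are made up for illustration):

```java
import java.nio.ByteBuffer;

public class PlaneCopy {
    // Copies a tightly-packed width*height plane from src into dst,
    // starting each row at row * rowStride, as required when rowStride
    // is larger than the image width.
    public static void copyPlane(byte[] src, ByteBuffer dst,
                                 int width, int height, int rowStride) {
        for (int row = 0; row < height; row++) {
            dst.position(row * rowStride);
            dst.put(src, row * width, width);
        }
    }

    public static void main(String[] args) {
        int width = 4, height = 2, rowStride = 6; // illustrative values
        byte[] src = {1, 2, 3, 4, 5, 6, 7, 8};
        ByteBuffer dst = ByteBuffer.allocate(rowStride * height);
        copyPlane(src, dst, width, height, rowStride);
        // rows land at offsets 0 and 6; the stride padding stays zero
        System.out.println(java.util.Arrays.toString(dst.array()));
    }
}
```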
I fill in the image buffers, which works too, but I do not see a way to queue the input buffer back into the codec for encoding... There is only a
encoder.queueInputBuffer (inputBufferId, position, size, presentationUs, 0);
method available, but nothing that matches an image. The size required for the call can be retrieved using
ByteBuffer byteBuffer = encoder.getInputBuffer (inputBufferId);
and
byteBuffer.remaining ();
But calling this in addition to getInputImage() seems to screw up the encoder.
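One workaround that suggests itself (an assumption based on the documented YUV 4:2:0 layout, not confirmed behavior): skip getInputBuffer() entirely and compute the payload size from the frame dimensions, since a tightly packed 4:2:0 frame is always width * height * 3 / 2 bytes:

```java
public class Yuv420Size {
    // Payload size of one tightly-packed YUV 4:2:0 frame:
    // a full-resolution Y plane plus quarter-resolution U and V planes.
    public static int frameSize(int width, int height) {
        return width * height * 3 / 2;
    }

    public static void main(String[] args) {
        System.out.println(frameSize(1280, 720)); // 1382400
    }
}
```

With that, the queue call could be encoder.queueInputBuffer(inputBufferId, 0, frameSize(width, height), presentationUs, 0), never touching the ByteBuffer at all; whether position 0 and a tightly-packed size are what every encoder expects is part of the assumption.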
Is this another missing piece of documentation, or is it just something I am getting wrong?
Did you get encoder.getInputImage working? I am working with a Bitmap, which I used to draw with a Canvas, and have converted it to NV21, but I am unsure how to separate the planes for pushing to the image buffer. – JCutting8
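Regarding the plane separation: NV21 stores a full Y plane followed by interleaved chroma bytes with V first, so a deinterleaving pass yields the three planes. A sketch in plain Java (before writing these into an Image, the target planes' pixelStride would still need checking, since the encoder's U and V buffers may themselves be semi-planar):

```java
public class Nv21Planes {
    // Splits an NV21 buffer (width*height Y bytes followed by
    // interleaved V/U pairs) into separate Y, U and V arrays,
    // returned in that order.
    public static byte[][] split(byte[] nv21, int width, int height) {
        int ySize = width * height;
        int cSize = ySize / 4; // chroma is subsampled 2x2
        byte[] y = new byte[ySize];
        byte[] u = new byte[cSize];
        byte[] v = new byte[cSize];
        System.arraycopy(nv21, 0, y, 0, ySize);
        for (int i = 0; i < cSize; i++) {
            v[i] = nv21[ySize + 2 * i];     // NV21 stores V first
            u[i] = nv21[ySize + 2 * i + 1];
        }
        return new byte[][] { y, u, v };
    }
}
```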