I want to capture every frame of a video and modify it before it is rendered on an Android device such as the Nexus 10. As far as I know, Android uses hardware to decode and render frames on this device, so I have to get the frame data from the GraphicBuffer, and before rendering the data is in YUV format.
I wrote a static method in AwesomePlayer.cpp that captures the frame data, modifies the frame, and writes it back into the GraphicBuffer for rendering.
Here is my demo code:
#include <sys/mman.h>   // mmap, munmap
#include <cstdlib>      // malloc, free
#include <cstring>      // memcpy, memset

static void handleFrame(MediaBuffer *buffer) {
    sp<GraphicBuffer> buf = buffer->graphicBuffer();
    size_t width  = buf->getWidth();
    size_t height = buf->getHeight();
    size_t ySize  = buffer->range_length();
    size_t uvSize = width * height / 2;

    uint8_t *yBuffer  = (uint8_t *)malloc(ySize + 1);
    uint8_t *uvBuffer = (uint8_t *)malloc(uvSize + 1);
    memset(yBuffer, 0, ySize + 1);
    memset(uvBuffer, 0, uvSize + 1);

    // On this device the UV plane lives in a separate allocation; its fd
    // is the second int in the native handle.
    int const *private_handle = buf->handle->data;

    void *yAddr  = NULL;
    void *uvAddr = NULL;
    buf->lock(GRALLOC_USAGE_SW_READ_OFTEN | GRALLOC_USAGE_SW_WRITE_OFTEN, &yAddr);
    // mmap takes six arguments; the trailing 0 is the file offset.
    uvAddr = mmap(0, uvSize, PROT_READ | PROT_WRITE, MAP_SHARED,
                  *(private_handle + 1), 0);

    if (yAddr != NULL && uvAddr != MAP_FAILED) {
        // Copy the data out of the graphic buffer...
        memcpy(yBuffer, yAddr, ySize);
        memcpy(uvBuffer, uvAddr, uvSize);

        // ...modify the YUV data here...

        // ...and copy it back into the graphic buffer.
        memcpy(yAddr, yBuffer, ySize);
        memcpy(uvAddr, uvBuffer, uvSize);
    }

    if (uvAddr != MAP_FAILED)
        munmap(uvAddr, uvSize);
    buf->unlock();
    free(yBuffer);
    free(uvBuffer);
}
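One caveat I have not verified on the Nexus 10: gralloc buffers are often row-padded, so buf->getStride() can be larger than buf->getWidth(), and then a flat memcpy of width * height bytes mixes padding into the pixel data. Here is a minimal sketch of a stride-aware row copy (copyPlane is a hypothetical helper; it assumes an 8-bit plane, where getStride() in pixels equals the stride in bytes, and a tightly packed destination):

// Hypothetical helper: copy one 8-bit plane row by row, honoring the
// source stride; the destination is assumed tightly packed (width bytes/row).
static void copyPlane(uint8_t *dst, const uint8_t *src,
                      size_t width, size_t height, size_t srcStride) {
    for (size_t row = 0; row < height; ++row) {
        memcpy(dst + row * width, src + row * srcStride, width);
    }
}

// e.g. for the Y plane:
// copyPlane(yBuffer, (const uint8_t *)yAddr, width, height, buf->getStride());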
I logged timestamps around the memcpy calls and realized that copying out of the GraphicBuffer takes much more time than copying data into it. For a 1920x1080 video, for example, the memcpy out of the GraphicBuffer takes about 30 ms, which is unacceptable for normal video playback.
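For reference, this is roughly how I measured it (a minimal sketch assuming AOSP's systemTime() and ns2ms() from utils/Timers.h, and that ALOGD is usable in AwesomePlayer.cpp with its LOG_TAG):

#include <utils/Timers.h>  // systemTime(), nsecs_t, ns2ms() (AOSP)

nsecs_t t0 = systemTime(SYSTEM_TIME_MONOTONIC);
memcpy(yBuffer, yAddr, ySize);    // copy out of the GraphicBuffer
memcpy(uvBuffer, uvAddr, uvSize);
nsecs_t t1 = systemTime(SYSTEM_TIME_MONOTONIC);
ALOGD("copy-out took %lld ms", (long long)ns2ms(t1 - t0));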
I have no idea why it takes so much time. My guess is that the GraphicBuffer is mapped as uncached (write-combined) GPU memory: CPU writes get buffered and stay fast, while every CPU read stalls on the slow memory, which would explain why copying into the GraphicBuffer looks normal.
Could anyone familiar with hardware decoding on Android take a look at this issue? Thanks very much.
Update: I found that I didn't have to use a GraphicBuffer to get the YUV data. I just use the hardware decoder to decode the video source and store the YUV data in ordinary memory, so I can read the YUV data from memory directly, which is very fast. You can find a similar solution in the AOSP source code or in open-source video player apps: I allocate plain memory buffers rather than graphic buffers, and then use the hardware decoder. Sample code in AOSP: frameworks/av/cmds/stagefright/SimplePlayer.cpp
Link: https://github.com/xdtianyu/android-4.2_r1/tree/master/frameworks/av/cmds/stagefright
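For anyone who finds this later: the same approach is also exposed through the public NDK. Below is a minimal sketch of a decode loop using AMediaCodec (API 21+, so newer than the android-4.2 code above). The key design choice is passing a NULL surface to AMediaCodec_configure: with a surface attached, output buffers may be opaque GPU memory (the slow-to-read case above), whereas without one the decoded YUV lands in CPU-readable output buffers. Extractor setup and error handling are omitted, and processYuv is a hypothetical hook:

#include <media/NdkMediaCodec.h>
#include <media/NdkMediaExtractor.h>

// Hypothetical hook where the decoded frame would be read/modified.
static void processYuv(uint8_t * /*data*/, size_t /*size*/) { /* ... */ }

// Assumes 'extractor' is already positioned on a video track described by
// 'format', and 'mime' is that track's MIME type (e.g. "video/avc").
static void decodeToMemory(AMediaExtractor *extractor,
                           AMediaFormat *format, const char *mime) {
    AMediaCodec *codec = AMediaCodec_createDecoderByType(mime);
    // NULL surface: decoded frames stay in the codec's output buffers,
    // so the YUV data is directly readable from the CPU.
    AMediaCodec_configure(codec, format, NULL /*surface*/, NULL, 0);
    AMediaCodec_start(codec);

    bool inputDone = false, outputDone = false;
    while (!outputDone) {
        if (!inputDone) {
            ssize_t inIdx = AMediaCodec_dequeueInputBuffer(codec, 10000);
            if (inIdx >= 0) {
                size_t cap;
                uint8_t *in = AMediaCodec_getInputBuffer(codec, inIdx, &cap);
                ssize_t n = AMediaExtractor_readSampleData(extractor, in, cap);
                if (n < 0) {  // no more samples: signal end of stream
                    AMediaCodec_queueInputBuffer(codec, inIdx, 0, 0, 0,
                            AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM);
                    inputDone = true;
                } else {
                    AMediaCodec_queueInputBuffer(codec, inIdx, 0, n,
                            AMediaExtractor_getSampleTime(extractor), 0);
                    AMediaExtractor_advance(extractor);
                }
            }
        }
        AMediaCodecBufferInfo info;
        ssize_t outIdx = AMediaCodec_dequeueOutputBuffer(codec, &info, 10000);
        if (outIdx >= 0) {
            size_t size;
            uint8_t *yuv = AMediaCodec_getOutputBuffer(codec, outIdx, &size);
            processYuv(yuv + info.offset, info.size);
            AMediaCodec_releaseOutputBuffer(codec, outIdx, false /*render*/);
            if (info.flags & AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM)
                outputDone = true;
        }
    }
    AMediaCodec_stop(codec);
    AMediaCodec_delete(codec);
}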