1 vote

I am trying to encode raw frames in NV12 format at a frame rate of 15, using avcodec. My capture device has a callback that fires whenever a raw viewfinder frame is available; I copy that frame, build an AVFrame from the data, and pass it to avcodec_encode_video as described in the API sample, but somehow I am not getting the expected result. I am using POSIX threads: the raw frames go into a buffer, and my encoder thread pulls frames from the buffer and encodes them. The encoding speed is far too slow (tested with both H.264 and MPEG-1). Is it a problem with my threading, or something else? I am at a loss. The output is also mysterious: the whole encoding process is a single function on a single thread, yet I see a bunch of frames encoded at a time. How exactly does the encoder work? Here is the code snippet for encoding:

while(cameraRunning)
{
    pthread_mutex_lock(&lock_encoder);
    if(vr->buffer->getTotalData()>0)
    {
        i++;
        fprintf(stderr,"Encoding %d\n",i);
        AVFrame *picture;
        int y = 0,x;
        picture = avcodec_alloc_frame();
        av_image_alloc(picture->data, picture->linesize,c->width, c->height,c->pix_fmt, 1);
        uint8_t* buf_source = new uint8_t[vr->width*vr->height*3/2];
        uint8_t* data = vr->buffer->Read(vr->width*vr->height*3/2);
        memcpy(buf_source,data,vr->width*vr->height*3/2);
        //free(&vr->buffer->Buffer[vr->buffer->getRead()-1][0]);
        /*for (y = 0; y < vr->height*vr->width; y++)
        {
            picture->data[0][(y/vr->width) * picture->linesize[0] + (y%vr->width)] = buf_source[(y/vr->width)+(y%vr->width)];
            if(y<vr->height*vr->width/4)
            {
                picture->data[1][(y/vr->width) * picture->linesize[1] + (y%vr->width)] = buf_source[vr->width*vr->height + 2 * ((y/vr->width)+(y%vr->width))];
                picture->data[2][(y/vr->width) * picture->linesize[2] + (y%vr->width)] = buf_source[vr->width*vr->height + 2 * ((y/vr->width)+(y%vr->width)) + 1];
            }
        }*/

        /* Y */
        for (y = 0; y < c->height; y++) {
            for (x = 0; x < c->width; x++) {
                picture->data[0][y * picture->linesize[0] + x] = x + y + i * 7;
            }
        }

        /* Cb and Cr */
        for (y = 0; y < c->height/2; y++) {
            for (x = 0; x < c->width/2; x++) {
                picture->data[1][y * picture->linesize[1] + x] = 128 + y + i * 2;
                picture->data[2][y * picture->linesize[2] + x] = 64 + x + i * 5;
            }
        }
        delete[] buf_source;  /* allocated with new[], so delete[], not free() */
        fprintf(stderr,"Data ready\n");

        outbuf_size = 100000 + c->width*c->height*3/2;
        outbuf = (uint8_t*)malloc(outbuf_size);
        fprintf(stderr,"Preparation done!!!\n");
        out_size = avcodec_encode_video(c, outbuf, outbuf_size, picture);
        had_output |= out_size;
        printf("encoding frame %3d (size=%5d)\n", i, out_size);
        fwrite(outbuf, 1, out_size, f);
        free(outbuf);  /* otherwise this leaks every iteration */
        av_free(picture->data[0]);
        av_free(picture);

    }
    pthread_mutex_unlock(&lock_encoder);
}
It's possible the encoder buffers up frames before outputting them... I assume the colors are getting through OK? - rogerdpack

1 Answer

0 votes

You can use sws_scale from libswscale for the colorspace conversion. First create an SwsContext describing the source (NV12) and destination (YUV420P) formats with sws_getContext:

m_pSwsCtx = sws_getContext(picture_width, 
               picture_height, 
               PIX_FMT_NV12, 
               picture_width, 
               picture_height,
               PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL);

And then when you want to do the conversion each frame,

sws_scale(m_pSwsCtx, frameData, frameLineSize, 0, frameHeight,
    outFrameData, outFrameLineSize);