7 votes

I am working with camera2Basic and trying to get each frame's data so I can do some image processing. I am using the camera2 API on Android 5.0. Everything is fine when I only run the camera preview, and it is fluid. But the preview stutters as soon as I use the ImageReader.OnImageAvailableListener callback to get each frame's data, which makes for a bad user experience. The following are my related code snippets.

This is the setup for the camera and the ImageReader; I set the image format to YUV_420_888:

public <T> Size setUpCameraOutputs(CameraManager cameraManager, Class<T> kClass, int width, int height) {
    boolean flagSuccess = true;
    try {
        for (String cameraId : cameraManager.getCameraIdList()) {
            CameraCharacteristics characteristics = cameraManager.getCameraCharacteristics(cameraId);
            // choose the front or back camera
            if (FLAG_CAMERA.BACK_CAMERA == mChosenCamera &&        
                    CameraCharacteristics.LENS_FACING_BACK != characteristics.get(CameraCharacteristics.LENS_FACING)) {
                continue;
            }
            if (FLAG_CAMERA.FRONT_CAMERA == mChosenCamera &&  
                    CameraCharacteristics.LENS_FACING_FRONT != characteristics.get(CameraCharacteristics.LENS_FACING)) {
                continue;
            }
            StreamConfigurationMap map = characteristics.get(
                    CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);

            Size largestSize = Collections.max(
                    Arrays.asList(map.getOutputSizes(ImageFormat.YUV_420_888)),
                    new CompareSizesByArea());

            mImageReader = ImageReader.newInstance(largestSize.getWidth(), largestSize.getHeight(),
                    ImageFormat.YUV_420_888, 3);

            mImageReader.setOnImageAvailableListener(mOnImageAvailableListener, mBackgroundHandler);
            ...
            mCameraId = cameraId;
        }
    } catch (CameraAccessException e) {
        e.printStackTrace();
    } catch (NullPointerException e) {
        // an NPE can be thrown when the camera2 API is used but not supported on this device
    }
    ......
}

When the camera has opened successfully, I create a CameraCaptureSession for the camera preview:

private void createCameraPreviewSession() {
    if (null == mTexture) {
        return;
    }

    // We configure the size of default buffer to be the size of camera preview we want.
    mTexture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());

    // This is the output Surface we need to start preview
    Surface surface = new Surface(mTexture);

    // We set up a CaptureRequest.Builder with the output Surface.
    try {
        mPreviewRequestBuilder =
                mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        mPreviewRequestBuilder.addTarget(mImageReader.getSurface());
        mPreviewRequestBuilder.addTarget(surface);

        // We create a CameraCaptureSession for camera preview
        mCameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()),
                new CameraCaptureSession.StateCallback() {

                    @Override
                    public void onConfigured(CameraCaptureSession session) {
                        if (null == mCameraDevice) {
                            return;
                        }

                        // when the session is ready, we start displaying the preview
                        mCaptureSession = session;

                        // Finally, we start displaying the camera preview
                        mPreviewRequest = mPreviewRequestBuilder.build();
                        try {
                            mCaptureSession.setRepeatingRequest(mPreviewRequest,
                                    mCaptureCallback, mBackgroundHandler);
                        } catch (CameraAccessException e) {
                            e.printStackTrace();
                        }
                    }

                    @Override
                    public void onConfigureFailed(CameraCaptureSession session) {

                    }
                }, null);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}

The last part is the ImageReader.OnImageAvailableListener callback:

private final ImageReader.OnImageAvailableListener mOnImageAvailableListener =
        new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Log.d(TAG, "The onImageAvailable thread id: " + Thread.currentThread().getId());
        Image readImage = reader.acquireLatestImage();
        if (readImage != null) {   // acquireLatestImage() can return null
            readImage.close();
        }
    }
};

Maybe I have done the setup wrong, but I have tried several times and it doesn't work. Maybe there is another way to get frame data besides ImageReader, but I don't know it. Does anybody know how to get each frame's data in real time?

What sort of processing are you trying to do? I think if you want to save the image, use an ImageReader, but if you want to do efficient real-time processing, you should send the data to the Surface buffer associated with an Allocation instead (see the sketch after these comments). – rcsumner
@Sumner, I want to get each frame from the camera and do face detection rather than saving the image. The solution you suggest sounds good; can you give more detail? – CJZ
...haha not really, sorry. I usually just save the images for my purposes. I know it involves using RenderScript, though. Look at the Allocation class documentation. – rcsumner
However, also note that many devices offer face detection built in, though not all. If you have a specific device you are targeting, see whether it exposes this via the camera2 API. – rcsumner
@Sumner, I have the same problem, but I am trying to detect a QR code in real time. Should I use MediaCodec as the Surface buffer? – Gutyn
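
Since the Allocation route rcsumner mentions never got spelled out, here is a rough sketch of what it might look like; the RenderScript context, the preview dimensions, and the kernel call are assumptions for illustration, not part of the question's code:

    // Hypothetical sketch: route camera frames into a RenderScript Allocation
    // instead of an ImageReader, as suggested in the comments above.
    RenderScript rs = RenderScript.create(context);                 // assumed Context
    Type.Builder yuvType = new Type.Builder(rs, Element.YUV(rs))
            .setX(previewWidth)                                     // assumed preview size
            .setY(previewHeight)
            .setYuvFormat(ImageFormat.YUV_420_888);
    Allocation input = Allocation.createTyped(rs, yuvType.create(),
            Allocation.USAGE_IO_INPUT | Allocation.USAGE_SCRIPT);

    // Use input.getSurface() as the capture target instead of mImageReader.getSurface().
    input.setOnBufferAvailableListener(new Allocation.OnBufferAvailableListener() {
        @Override
        public void onBufferAvailable(Allocation a) {
            a.ioReceive();   // latch the newest frame into the Allocation
            // run a RenderScript kernel on 'a' here (e.g. pre-processing for detection)
        }
    });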

2 Answers

1 vote

I do not believe that Chen is correct. The image format has almost no effect on speed on the devices I have tested. Instead, the problem seems to be the image size. On an Xperia Z3 Compact with the image format YUV_420_888, I am offered a number of different options in the StreamConfigurationMap's getOutputSizes method:

[1600x1200, 1280x720, 960x720, 720x480, 640x480, 480x320, 320x240, 176x144]

For these respective sizes, the maximum fps I get when setting mImageReader.getSurface() as a target for the mPreviewRequestBuilder are:

[13, 18, 25, 28, 30, 30, 30, 30]

So one solution is to use a lower resolution to achieve the rate you want (a rough sketch of selecting a smaller size follows below). For the curious: note that these timings do not seem to be affected by the lines

    mPreviewRequestBuilder.addTarget(surface);
...
    mCameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()),

I was worried that adding the on-screen surface might add overhead, but if I remove that first line and change the second to

    mCameraDevice.createCaptureSession(Arrays.asList(mImageReader.getSurface()),

then I see the timings change by less than 1 fps. So it doesn't seem to matter whether you are also displaying the image on the screen.
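
To make the lower-resolution workaround concrete, here is a rough sketch of how the selection might look; it reuses the question's mImageReader, mOnImageAvailableListener, mBackgroundHandler, and CompareSizesByArea names, and the 640x480 cap is just an arbitrary example:

    // Sketch: pick the largest YUV_420_888 size that fits within 640x480
    // instead of taking Collections.max over all advertised sizes.
    Size[] yuvSizes = map.getOutputSizes(ImageFormat.YUV_420_888);
    Size chosenSize = null;
    for (Size option : yuvSizes) {
        if (option.getWidth() <= 640 && option.getHeight() <= 480
                && (chosenSize == null
                    || option.getWidth() * option.getHeight()
                       > chosenSize.getWidth() * chosenSize.getHeight())) {
            chosenSize = option;
        }
    }
    if (chosenSize == null) {
        // nothing fits the cap; fall back to the smallest advertised size
        chosenSize = Collections.min(Arrays.asList(yuvSizes), new CompareSizesByArea());
    }

    mImageReader = ImageReader.newInstance(chosenSize.getWidth(), chosenSize.getHeight(),
            ImageFormat.YUV_420_888, 3);
    mImageReader.setOnImageAvailableListener(mOnImageAvailableListener, mBackgroundHandler);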

I think there is simply some overhead in the camera2 API or ImageReader's framework that makes it impossible to get the full rate that the TextureView is clearly getting.

One of the most disappointing things of all is that, if you switch back to the deprecated Camera API, you can easily get 30 fps by setting up a PreviewCallback via the Camera.setPreviewCallbackWithBuffer method. With that method, I am able to get 30fps regardless of the resolution. Specifically, although it does not offer me 1600x1200 directly, it does offer 1920x1080, and even that is 30fps.
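
For completeness, a rough sketch of that deprecated-API path; the buffer size assumes NV21 (the default preview format), and a preview display or texture still has to be attached separately:

    // Deprecated android.hardware.Camera path: frames arrive in onPreviewFrame at the
    // full preview rate, reusing a caller-supplied buffer to avoid per-frame allocations.
    Camera legacyCamera = Camera.open();
    Camera.Parameters params = legacyCamera.getParameters();
    params.setPreviewSize(1920, 1080);
    legacyCamera.setParameters(params);

    Camera.Size previewSize = legacyCamera.getParameters().getPreviewSize();
    int bufferSize = previewSize.width * previewSize.height
            * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8;   // NV21 = 12 bits/pixel
    legacyCamera.addCallbackBuffer(new byte[bufferSize]);

    legacyCamera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            // process 'data' (NV21) here, then return the buffer so it can be reused
            camera.addCallbackBuffer(data);
        }
    });
    legacyCamera.startPreview();   // also requires setPreviewTexture()/setPreviewDisplay()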

0 votes

I'm trying the same thing. I think you could change the format, like this:

mImageReader = ImageReader.newInstance(largestSize.getWidth(),
                                       largestSize.getHeight(),
                                       ImageFormat.FLEX_RGB_888, 3);

Using YUV may cause the CPU to compress the data, and that can take some time; RGB can be displayed on the device directly. Also, the face detection for each image should be done on a separate thread, as you probably know.
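
If you do keep the ImageReader, one way to follow that last piece of advice is to copy the data out and hand the heavy work to a worker thread so the callback returns quickly. A rough sketch, where analyzeFrame() is a hypothetical analysis routine (a real implementation would also account for the plane's row stride):

    // Keep onImageAvailable() cheap: copy the Y plane out, close the Image immediately,
    // and run the detection on a separate HandlerThread.
    HandlerThread workerThread = new HandlerThread("FrameAnalysis");
    workerThread.start();
    final Handler workerHandler = new Handler(workerThread.getLooper());

    ImageReader.OnImageAvailableListener listener = new ImageReader.OnImageAvailableListener() {
        @Override
        public void onImageAvailable(ImageReader reader) {
            Image image = reader.acquireLatestImage();
            if (image == null) {
                return;
            }
            ByteBuffer yPlane = image.getPlanes()[0].getBuffer();
            final byte[] yData = new byte[yPlane.remaining()];
            yPlane.get(yData);
            final int width = image.getWidth();
            final int height = image.getHeight();
            image.close();   // release the buffer so the camera can keep producing frames

            workerHandler.post(new Runnable() {
                @Override
                public void run() {
                    analyzeFrame(yData, width, height);   // hypothetical face/QR detection
                }
            });
        }
    };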