
I'm using Android's Camera2 API and would like to perform some image processing on camera preview frames and then display the changes back on the preview (TextureView).

Starting from the common camera2video example, I've set up an ImageReader in my openCamera().

    mImageReader = ImageReader.newInstance(mVideoSize.getWidth(), mVideoSize.getHeight(),
            ImageFormat.YUV_420_888, mMaxBufferedImages);
    mImageReader.setOnImageAvailableListener(mImageAvailable, mBackgroundHandler);

In my startPreview(), I've set up the Surfaces that receive frames from the CaptureRequest.

    SurfaceTexture texture = mTextureView.getSurfaceTexture();
    assert texture != null;
    texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
    mPreviewBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);

    List<Surface> surfaces = new ArrayList<>();

    // Here is where we connect the mPreviewSurface to the mTextureView.
    mPreviewSurface = new Surface(texture);
    surfaces.add(mPreviewSurface);
    mPreviewBuilder.addTarget(mPreviewSurface);

    // Connect our Image Reader to the Camera to get the preview frames.
    Surface readerSurface = mImageReader.getSurface();
    surfaces.add(readerSurface);
    mPreviewBuilder.addTarget(readerSurface);
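
For reference, I then create the capture session with both surfaces, roughly as in the camera2video sample (a sketch with error handling trimmed; mPreviewSession is the session field from that sample):

    mCameraDevice.createCaptureSession(surfaces,
            new CameraCaptureSession.StateCallback() {
                @Override
                public void onConfigured(CameraCaptureSession session) {
                    mPreviewSession = session;
                    try {
                        // Stream preview frames continuously to both targets.
                        session.setRepeatingRequest(mPreviewBuilder.build(),
                                null, mBackgroundHandler);
                    } catch (CameraAccessException e) {
                        Log.e(TAG, "Failed to start the preview", e);
                    }
                }

                @Override
                public void onConfigureFailed(CameraCaptureSession session) {
                    Log.e(TAG, "Capture session configuration failed");
                }
            }, mBackgroundHandler);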

Then I modify the image data in my OnImageAvailableListener's onImageAvailable() callback.

    ImageReader.OnImageAvailableListener mImageAvailable = new ImageReader.OnImageAvailableListener() {
        @Override
        public void onImageAvailable(ImageReader reader) {
            Image image = null;
            try {
                image = reader.acquireLatestImage();
                if (image == null)
                    return;

                final Image.Plane[] planes = image.getPlanes();

                // Do something to the pixels.
                // Black out the top-left quarter of the image.
                // Note: rows are padded out to the plane's row stride,
                // which can be wider than the image width.
                ByteBuffer y_data_buffer = planes[0].getBuffer();
                byte[] y_data = new byte[y_data_buffer.remaining()];
                y_data_buffer.get(y_data);
                int y_row_stride = planes[0].getRowStride();
                for (int row = 0; row < image.getHeight() / 2; row++) {
                    for (int col = 0; col < image.getWidth() / 2; col++) {
                        y_data[row * y_row_stride + col] = 0;
                    }
                }
            } catch (IllegalStateException e) {
                Log.d(TAG, "mImageAvailable() Too many images acquired");
            } finally {
                if (image != null)
                    image.close();
            }
        }
    };

As I understand it, I am currently sending frames to two Surface instances: one for mTextureView and one for my ImageReader.

How can I get my mTextureView to use the same Surface as the ImageReader, or should I be manipulating the image data directly from the mTextureView's Surface?

Thanks

1 Answer


If you only want to display the modified output, then I'm not sure why you have two outputs configured (the TextureView and the ImageReader).

Generally, if you want something like

camera -> in-app edits -> display

You have several options, depending on the kinds of edits you want, and various tradeoffs between ease of coding, performance, and so on.

One of the most efficient options is to do your edits as an OpenGL shader. In that case, a GLSurfaceView is probably the simplest option. Create a SurfaceTexture object with a texture ID that's unused in the GLSurfaceView's EGL context, and pass a Surface created from the SurfaceTexture to the camera session and requests. Then in the GLSurfaceView's drawing method, call the SurfaceTexture's updateTexImage() method, and use the texture ID to render your output as you'd like it.
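
A rough sketch of that wiring, assuming a GLSurfaceView.Renderer (texture parameters and shader setup abbreviated; names like mCameraTexture and mGLSurfaceView are placeholders):

    // In onSurfaceCreated(): generate an OES texture and wrap it in a SurfaceTexture.
    int[] tex = new int[1];
    GLES20.glGenTextures(1, tex, 0);
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
    mCameraTexture = new SurfaceTexture(tex[0]);
    // Ask for a redraw whenever the camera produces a frame.
    mCameraTexture.setOnFrameAvailableListener(st -> mGLSurfaceView.requestRender());
    // Hand new Surface(mCameraTexture) to the camera session and requests.

    // In onDrawFrame(), on the GL thread:
    mCameraTexture.updateTexImage();  // latch the newest camera frame into the texture
    // Bind tex[0] as a samplerExternalOES in your fragment shader, do your
    // edits there, and draw a full-screen quad.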

That does require a lot of OpenGL code, so if you're not familiar with it, that can be challenging.

You can also use RenderScript for a similar effect; there you'll have an output SurfaceView or TextureView, plus a RenderScript script that reads from an input Allocation fed by the camera and writes to an output Allocation that feeds the View; you can create such Surface-backed Allocations directly. The Google HdrViewfinderDemo camera2 sample app uses this approach. It's a lot less boilerplate.
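
The Allocation plumbing in that approach looks roughly like this (a sketch patterned on HdrViewfinderDemo; the script kernel, its name, and the gFrame global are hypothetical):

    // Camera-facing input Allocation (YUV, preview-sized).
    Type.Builder yuvType = new Type.Builder(mRS, Element.YUV(mRS))
            .setX(width).setY(height).setYuvFormat(ImageFormat.YUV_420_888);
    Allocation input = Allocation.createTyped(mRS, yuvType.create(),
            Allocation.USAGE_IO_INPUT | Allocation.USAGE_SCRIPT);

    // View-facing output Allocation (RGBA), tied to the view's Surface.
    Type.Builder rgbType = new Type.Builder(mRS, Element.RGBA_8888(mRS))
            .setX(width).setY(height);
    Allocation output = Allocation.createTyped(mRS, rgbType.create(),
            Allocation.USAGE_IO_OUTPUT | Allocation.USAGE_SCRIPT);
    output.setSurface(mSurfaceView.getHolder().getSurface());

    // input.getSurface() goes to the camera as a request target. Then, per frame:
    input.ioReceive();                 // latch the newest camera buffer
    mScript.set_gFrame(input);         // bind the frame as a script global
    mScript.forEach_process(output);   // kernel reads the YUV, writes edited RGBA
    output.ioSend();                   // push the result to the view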

Third, you can just use an ImageReader like you're doing now, but you'll have to do a lot of conversion yourself to write the result to the screen. The simplest (but slowest) option is to get a Canvas from a SurfaceView or an ImageView and write pixels to it yourself. Or you can do it via the NDK's ANativeWindow APIs, which is faster but requires writing JNI code, and still requires you to do the YUV->RGB conversion yourself (or use undocumented APIs to push YUV into the ANativeWindow and hope it works).
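
For instance, the Canvas path might look something like this (a slow-path sketch; convertYuvToBitmap stands in for your own YUV_420_888-to-ARGB conversion):

    // In onImageAvailable(), after editing the pixel data.
    Bitmap frame = convertYuvToBitmap(image);  // your own YUV -> RGB conversion
    SurfaceHolder holder = mSurfaceView.getHolder();
    Canvas canvas = holder.lockCanvas();
    if (canvas != null) {
        // Scale the frame to fill the view.
        canvas.drawBitmap(frame, null,
                new Rect(0, 0, canvas.getWidth(), canvas.getHeight()), null);
        holder.unlockCanvasAndPost(canvas);    // post the frame to the display
    }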