Why does rendering a camera frame with OpenGL (and then saving it) produce jagged images compared to saving a camera frame with ImageReader (OpenGL: B/W, ImageReader: color)? Both use the same dimensions.
Background/details to this question:
I'm building a non-blocking version of ImageReader using Camera2 and GLES (up to GLES31) to extract high-resolution still camera frames without interrupting the preview.
The setup uses a TextureView for the preview and a GLSurfaceView for capturing the frames off-screen. The GLSurfaceView has a simple renderer and shader that draw the GL_TEXTURE_EXTERNAL_OES texture onto a framebuffer, from which the raw image data is read back.
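For reference, the fragment shader is the usual external-OES passthrough; a simplified sketch (the class and variable names are illustrative, not my exact code):

```java
// Illustrative holder for the external-OES passthrough fragment shader.
public final class OesShader {
    public static final String FRAGMENT_SHADER =
        "#extension GL_OES_EGL_image_external : require\n"
        + "precision mediump float;\n"
        + "uniform samplerExternalOES uTexture;\n"
        + "varying vec2 vTexCoord;\n"
        + "void main() {\n"
        + "    gl_FragColor = texture2D(uTexture, vTexCoord);\n"
        + "}\n";
}
```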
On my test device the dimensions are set up like this:
- Camera sensor size is 3264x2448 (obtained via CameraCharacteristics)
- GLSurfaceView has defaultBufferSize 3264x2448 and a 3264x2448 layout. The format is also set to JPEG via getHolder().setFormat(ImageFormat.JPEG), with JPEG quality 100
- Framebuffer is 3264x2448
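The readback from the framebuffer assumes glReadPixels with GL_RGBA / GL_UNSIGNED_BYTE, so the destination buffer is sized width * height * 4. A minimal sketch of that allocation (the helper class is illustrative):

```java
import java.nio.ByteBuffer;

public final class ReadbackBuffer {
    // glReadPixels with GL_RGBA / GL_UNSIGNED_BYTE writes 4 bytes per pixel
    static final int BYTES_PER_PIXEL = 4;

    public static int sizeBytes(int width, int height) {
        return width * height * BYTES_PER_PIXEL;
    }

    public static ByteBuffer allocate(int width, int height) {
        // Direct buffer so GL can write into it without an intermediate copy
        return ByteBuffer.allocateDirect(sizeBytes(width, height));
    }
}
```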
In my mind this should result in a 1:1 transfer of the camera frame, but clearly that is not the case.
I have tried several other dimensions (matching the screen size, picking appropriate sizes from the StreamConfigurationMap, custom values), and all of them produce jagged output. It's as if some up- or downsampling is always happening in the background (is the GL_TEXTURE_EXTERNAL_OES texture actually coming in at different dimensions?).
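By "picking appropriate dimensions from StreamConfigurationMap" I mean selecting the largest output size that matches the sensor's aspect ratio. The selection logic, simplified to plain {width, height} pairs instead of the android.util.Size values returned by getOutputSizes() (the helper is illustrative):

```java
import java.util.Arrays;
import java.util.Comparator;

public final class SizeChooser {
    /**
     * Picks the largest supported size whose aspect ratio matches the
     * sensor's, from a list of {width, height} pairs. Falls back to the
     * first entry when nothing matches.
     */
    public static int[] chooseLargestMatching(int[][] supported, int sensorW, int sensorH) {
        return Arrays.stream(supported)
                // cross-multiplication avoids float comparison: w/h == sensorW/sensorH
                .filter(s -> s[0] * sensorH == s[1] * sensorW)
                .max(Comparator.comparingLong(s -> (long) s[0] * s[1]))
                .orElse(supported[0]);
    }
}
```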
Any input on this topic is appreciated, as I've been stuck on this for weeks at this point.
