
What I want to do is access the pixel data of a texture in an OpenGL shader, compare the red components of all pixels, and get the coordinate of the pixel with the maximum red component. I can do this in Objective-C on the CPU; the code is shown below.

- (void)processNewPixelBuffer:(CVPixelBufferRef)pixelBuffer
{
    short maxR = 0;
    NSInteger x = -1, y = -1;

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);

    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    uint8_t *src_buff = CVPixelBufferGetBaseAddress(pixelBuffer);

    // copy the red channel into a height x width array
    short **rawData = malloc(sizeof(short *) * height);
    for (int i = 0; i < height; i++) {
        rawData[i] = malloc(sizeof(short) * width);
        for (int j = 0; j < width; j++)
            rawData[i][j] = (short)src_buff[i * bytesPerRow + j * 4];
    }

    // find the pixel with the largest red component
    for (int i = 0; i < height; i++)
        for (int j = 0; j < width; j++)
            if (rawData[i][j] >= maxR) {
                maxR = rawData[i][j];
                x = j;
                y = i;
            }

    for (int i = 0; i < height; i++)
        free(rawData[i]);
    free(rawData);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}

So my question is: how do I use the GPU to do this processing? I can make the pixelBuffer available as a texture to the OpenGL shader.
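
(For reference, a typical way to do this on iOS is via the Core Video texture cache; the sketch below is simplified, and _textureCache stands for a cache created earlier with CVOpenGLESTextureCacheCreate.)

CVOpenGLESTextureRef texture = NULL;
CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(
    kCFAllocatorDefault,
    _textureCache,          // created earlier with CVOpenGLESTextureCacheCreate
    pixelBuffer,            // the incoming CVPixelBufferRef
    NULL,
    GL_TEXTURE_2D,
    GL_RGBA,
    (GLsizei)CVPixelBufferGetWidth(pixelBuffer),
    (GLsizei)CVPixelBufferGetHeight(pixelBuffer),
    GL_BGRA,                // camera buffers are usually kCVPixelFormatType_32BGRA
    GL_UNSIGNED_BYTE,
    0,
    &texture);

if (err == kCVReturnSuccess) {
    glBindTexture(GL_TEXTURE_2D, CVOpenGLESTextureGetName(texture));
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
}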

Vertex Shader

attribute vec4 position;
attribute vec4 inputTextureCoordinate;

varying vec2 textureCoordinate;

void main()
{
    gl_Position = position;
    textureCoordinate = inputTextureCoordinate.xy;
}

Fragment Shader

varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture; //The input sample, in RGBA format

void main()
{
    // Code here
}

How do I modify the shader so that I can find the pixel with the maximum red component? Then I want to turn that pixel red and every other pixel white. Is that possible?

Welcome to Stack Overflow! If your question did not get an answer, the correct response is not to ask the same question again. The extra material should have been added to your original question; then flag it for a moderator to consider re-opening. - Nicol Bolas
This really isn't a good application for a fragment shader. Fragment shaders work best when performing simple operations on a very limited set of inputs. You can't read every texel of a texture every time the fragment shader runs. - Tim
@Tim You could use a simple reduction shader, which for each pixel queries four neighbouring texels, computes their maximum red and the coordinate of this maximum texel and puts these values out as the color. Then just repeat this with this output as input texture until you have your maxred and texel coordinates in a 1x1 framebuffer (or until you do it on a small image on CPU). But still I don't know if that would buy him anything. Nevertheless of course you don't read each texel in each fragment shader invocation. - Christian Rau

1 Answer


What you could do is use a classic reduction shader. You render a screen-sized quad into a texture/screen of half the dimensions of your input texture (best done using FBOs), and in the fragment shader you compute the maximum of a 2x2 texel block for each pixel, writing out the maximum value and its corresponding texture coordinates:

//no need for any texture coords, we render screen-aligned anyway
precision highp float;

uniform sampler2D inputImageTexture; //The input sample, in RGBA format
uniform vec2 invImageSize; // (1/width, 1/height) of the input texture

void main()
{
    vec2 coord = (2.0 * floor(gl_FragCoord.xy) + 0.5) * invImageSize;
    float ll = texture2D(inputImageTexture, coord).r;
    float lr = texture2D(inputImageTexture, coord + vec2(invImageSize.x, 0.0)).r;
    float ul = texture2D(inputImageTexture, coord + vec2(0.0, invImageSize.y)).r;
    float ur = texture2D(inputImageTexture, coord + invImageSize).r;

    vec4 color = vec4(ll, coord, 1.0);
    if(lr > color.r)
        color.xyz = vec3(lr, coord + vec2(invImageSize.x, 0.0));
    if(ul > color.r)
        color.xyz = vec3(ul, coord + vec2(0.0, invImageSize.y));
    if(ur > color.r)
        color.xyz = vec3(ur, coord + invImageSize);
    gl_FragColor = color;
}

Then you use this output texture as the input texture for the next step and render into a texture of again half the size, until you are down to a 1x1 texture (or rather a texture small enough to process on the CPU). Of course, in the second and all following passes you have to output the stored texture coordinate instead of the computed one:

vec2 coord = (2.0 * floor(gl_FragCoord.xy) + 0.5) * invImageSize;
vec4 ll = texture2D(inputImageTexture, coord);
vec4 lr = texture2D(inputImageTexture, coord + vec2(invImageSize.x, 0.0));
vec4 ul = texture2D(inputImageTexture, coord + vec2(0.0, invImageSize.y));
vec4 ur = texture2D(inputImageTexture, coord + invImageSize);

ll = (lr.r > ll.r) ? lr : ll;
ll = (ul.r > ll.r) ? ul : ll;
ll = (ur.r > ll.r) ? ur : ll;
gl_FragColor = ll;
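
Host-side, the whole reduction is just a loop of render-to-texture passes. Roughly like this sketch (the helpers createRGBATexture, createFBOWithColorAttachment and drawFullScreenQuad, as well as the two program handles, stand for whatever setup code you already have):

GLuint srcTex = cameraTexture;             // texture containing the input frame
int w = inputWidth, h = inputHeight;
GLuint program = firstPassProgram;         // the shader from the first listing

while (w > 1 || h > 1) {
    int dstW = (w + 1) / 2, dstH = (h + 1) / 2;

    GLuint dstTex = createRGBATexture(dstW, dstH);         // placeholder helper
    GLuint fbo    = createFBOWithColorAttachment(dstTex);  // placeholder helper

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(0, 0, dstW, dstH);

    glUseProgram(program);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, srcTex);
    glUniform1i(glGetUniformLocation(program, "inputImageTexture"), 0);
    glUniform2f(glGetUniformLocation(program, "invImageSize"), 1.0f / w, 1.0f / h);

    drawFullScreenQuad();                  // two triangles covering the viewport

    srcTex = dstTex;
    w = dstW;
    h = dstH;
    program = reducePassProgram;           // later passes pass the stored coords through
}

// The last render target is 1x1: red = maximum value, green/blue = its coordinate.
GLubyte result[4];
glReadPixels(0, 0, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, result);
float maxCoordX = result[1] / 255.0f;
float maxCoordY = result[2] / 255.0f;

Note that with an ordinary 8-bit RGBA render target the stored coordinates get quantized to 256 steps per axis; if your hardware can render into (half-)float textures, using those for the intermediate passes keeps the coordinates accurate.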

When you finally have the texel coordinate with the maximum red component (as a normalized texture coordinate in [0,1]), you just need to draw a completely white screen/texture-sized quad and a single red point at this position. But I cannot promise you that this multi-pass algorithm (which is rather cumbersome compared to the simplicity of the task) will really buy you anything over the pure CPU solution. You cannot do it in just one pass, magically reading all of an image's pixels for each output pixel and deciding its color; that's just not how fragment shaders are (or should be) used.
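
That final pass could look roughly like this (a sketch: maxCoordX/maxCoordY are the normalized coordinates read back from the reduction, and pointProgram stands for a trivial shader that sets gl_PointSize and outputs plain red):

glBindFramebuffer(GL_FRAMEBUFFER, 0);      // back to the default framebuffer/screen
glViewport(0, 0, screenWidth, screenHeight);

glClearColor(1.0f, 1.0f, 1.0f, 1.0f);      // everything white
glClear(GL_COLOR_BUFFER_BIT);

// convert the normalized [0,1] coordinate to clip space [-1,1]
// (depending on your setup you may also need to flip the y axis)
GLfloat pos[2] = { maxCoordX * 2.0f - 1.0f, maxCoordY * 2.0f - 1.0f };

glUseProgram(pointProgram);                // trivial shader drawing a red point
GLint posAttrib = glGetAttribLocation(pointProgram, "position");
glEnableVertexAttribArray(posAttrib);
glVertexAttribPointer(posAttrib, 2, GL_FLOAT, GL_FALSE, 0, pos);
glDrawArrays(GL_POINTS, 0, 1);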

EDIT: And by the way, there is probably no need to copy all your data into an additional temporary buffer in your CPU solution, let alone one that scatters it around wildly in memory by using an array of arrays for what should be a single memory block (or no copy at all when working on the input directly), not to mention the mass of memory allocations. So first fix your CPU solution before even thinking about moving anything onto the GPU.
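
For comparison, the CPU search can run directly on the locked buffer without any intermediate copies, along these lines (assuming the same pixel layout as your original code):

CVPixelBufferLockBaseAddress(pixelBuffer, 0);
size_t width  = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
uint8_t *src = CVPixelBufferGetBaseAddress(pixelBuffer);

uint8_t maxR = 0;
NSInteger x = -1, y = -1;
for (size_t row = 0; row < height; row++) {
    const uint8_t *p = src + row * bytesPerRow;
    for (size_t col = 0; col < width; col++, p += 4) {
        uint8_t r = p[0];   // channel 0 as in your code; for a BGRA buffer red is p[2]
        if (r >= maxR) {
            maxR = r;
            x = col;
            y = row;
        }
    }
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);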