9 votes

I'm attempting to use shaders to modify a texture that is bound to a framebuffer, but I'm confused as to how the shaders get the "original" input values.

I'm doing the following:

GLuint textureId = 0;
glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_2D, textureId);
glTexImage2D(GL_TEXTURE_2D, ...);

GLuint framebufferId = 0;
glGenFramebuffers(1, &framebufferId);
glBindFramebuffer(GL_FRAMEBUFFER, framebufferId);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureId, 0);
glBindTexture(GL_TEXTURE_2D, 0);

GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) { ... }

glUseProgram(programId);
const GLenum buffer = GL_COLOR_ATTACHMENT0;
glDrawBuffers(1, &buffer);

What would empty vertex and fragment shaders look like? As I'm not drawing primitives, how do I set gl_Position in the vertex shader? And how do I pass the input colour through as the output colour of the fragment shader?

Empty vertex shader:

#version 330

void main()
{
    gl_Position = ??;
}

Empty fragment shader:

#version 330

layout(location = 0) out vec4 out_colour;

void main()
{
    out_colour = ???;
}
"As I'm not drawing primatives" - You have to draw a primitive, otherwise, well, nothing will be drawn. What is it what you are actually trying to do? To achieve some output (e.g. as result from "modifying a texture") there has to be drawn something. Likewise your shader code doesn't match your question's tags. Don't spam unrelated tags if the question doesn't even work for them.Christian Rau
How is a question about OpenGL, fragment and vertex shaders not relevant to the tags I've used...? I was under the impression that you could render to an offscreen framebuffer, with an attached texture, then use shaders to modify the texture, then use glReadPixels to get the modified data back. This is what I'm trying to do. – Mark Ingram
Because the shader code you posted won't work in OpenGL ES in any way. – Christian Rau
The empty shader code is a placeholder; I don't know what should go in there, hence the question. – Mark Ingram

1 Answer

16 votes

I was under the impression that you could render to an offscreen framebuffer, with an attached texture, then use shaders to modify the texture, then use glReadPixels to get the modified data back. This is what I'm trying to do.

Ah ok, so you want to feed a texture through a fragment shader to produce a new texture. First of all, keep in mind that you cannot just modify a texture in place, since you cannot read from the texture you are currently rendering to. You have to feed the texture to be modified into the fragment shader as an ordinary texture input and write the result to the current framebuffer as usual. That framebuffer could be an FBO with a different texture attached, a renderbuffer (sufficient if you only want to read the result back to the CPU anyway), or the default framebuffer. You don't need an FBO at all if you just want to transform one image into another; you only need one if the result should end up in an offscreen buffer or a texture.
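To make that concrete, here is a minimal sketch of the binding setup. The name sourceTextureId is an assumption (a second texture holding the input), while the textureId from your question stays attached to the FBO as the render target; the sampler uniform tex matches the fragment shader further below:

//sketch: render *from* sourceTextureId *into* the texture attached to the FBO
glBindFramebuffer(GL_FRAMEBUFFER, framebufferId);       //render target

glActiveTexture(GL_TEXTURE0);                           //input on texture unit 0
glBindTexture(GL_TEXTURE_2D, sourceTextureId);

glUseProgram(programId);
glUniform1i(glGetUniformLocation(programId, "tex"), 0); //sampler "tex" reads unit 0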

Furthermore, you still have to draw something in order for the rasterizer to generate actual fragments to invoke the fragment shader for. The usual way to do this is to draw a screen-sized quad parallel to the viewing plane, filling the complete viewport with fragments:

//initialization code
GLuint quad_vao = 0, quad_vbo = 0;
glGenVertexArrays(1, &quad_vao);
glBindVertexArray(quad_vao);

//a viewport-filling quad in clip space, laid out as a triangle strip
const GLfloat vertices[] = {
    -1.0f,  1.0f,
    -1.0f, -1.0f,
     1.0f,  1.0f,
     1.0f, -1.0f };
glGenBuffers(1, &quad_vbo);
glBindBuffer(GL_ARRAY_BUFFER, quad_vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, nullptr);
glEnableVertexAttribArray(0);

glBindVertexArray(0);    
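//the VAO keeps a reference to the buffer, so its name can be deleted right away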
glDeleteBuffers(1, &quad_vbo);

...
//render code
glBindVertexArray(quad_vao);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
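One caveat: the viewport has to match the size of the render target, otherwise the quad won't cover the whole attached texture. A one-line sketch, assuming texture_width and texture_height hold the texture's dimensions (as in the compute example further below):

glViewport(0, 0, texture_width, texture_height); //match the render target's size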

As the vertex shader a simple pass-through shader is enough, since the vertex positions are already in clip space:

#version 330

layout(location = 0) in vec4 in_position;

void main()
{
    gl_Position = in_position;
}

In the fragment shader we take the texture as input. The texture coordinate is already given by the fragment's position on the screen; we just need to normalize it by dividing by the texture size (or use a GL_TEXTURE_RECTANGLE texture together with a corresponding sampler2DRect to use the fragment coordinate directly):

#version 330

uniform sampler2D tex;
uniform vec2 tex_size;

layout(location = 0) out vec4 out_color;

void main()
{
    vec4 in_color = texture(tex, gl_FragCoord.xy / tex_size);
    out_color = in_color; //do whatever you want with in_color here
}
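
The tex_size uniform has to be filled in from the application side; a minimal sketch, again assuming texture_width and texture_height hold the input texture's dimensions:

glUseProgram(programId);
glUniform2f(glGetUniformLocation(programId, "tex_size"),
            (GLfloat)texture_width, (GLfloat)texture_height);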

That's all. The modified texture is written to the framebuffer, no matter where that framebuffer redirects to or what you do with its data afterwards.
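For instance, to get the result back to the CPU as you described, a glReadPixels sketch (assuming the FBO is still bound and its attachment is an 8-bit RGBA texture):

std::vector<GLubyte> pixels(texture_width * texture_height * 4);
glReadBuffer(GL_COLOR_ATTACHMENT0);  //read from the attachment we rendered into
glReadPixels(0, 0, texture_width, texture_height,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());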


EDIT: With OpenGL 4.3 and its compute shaders there is now a more direct way for such pure GPGPU tasks that don't need rasterization, like image processing. You can just invoke a compute shader (which is more similar to other GPU computing frameworks like CUDA or OpenCL than to the other OpenGL shaders) on a regular 2D domain and process the texture in place directly (using OpenGL 4.2's image load/store functionality). In this case all you need is the corresponding compute shader:

#version 430

layout(local_size_x = 32, local_size_y = 8) in; //or whatever fits hardware and shader

layout(binding = 0, rgba8) uniform image2D img; //adjust format to the actual data

void main()
{
    ivec2 idx = ivec2(gl_GlobalInvocationID.xy); //image functions take ivec2 coords
    if (any(greaterThanEqual(idx, imageSize(img))))
        return; //guard against excess invocations at the edges
    vec4 color = imageLoad(img, idx);
    //do whatever you want with color
    imageStore(img, idx, color);
}

Then all you need to do is bind the texture to the corresponding image unit (0, as set in the shader) and dispatch the compute shader over the two-dimensional image domain:

//again use the format that fits the texture data
glBindImageTexture(0, textureId, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA8);
glUseProgram(compute_program);  //a program with a single GL_COMPUTE_SHADER
//glDispatchCompute takes the number of work groups, not invocations,
//so divide by the local group size (rounding up)
glDispatchCompute((texture_width + 31) / 32, (texture_height + 7) / 8, 1);

And that's all: you don't need an FBO, you don't need any other shaders, and you don't need to draw anything, it's just raw computation. But whether this more direct approach actually results in better performance still has to be evaluated. Likewise, you might need to pay attention to proper memory synchronization of the modified texture, especially when trying to read from it afterwards. Consult deeper material on image load/store for further information.
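As a sketch of that synchronization (the exact barrier bits depend on how the texture is used afterwards):

glDispatchCompute((texture_width + 31) / 32, (texture_height + 7) / 8, 1);
//make the image writes visible to later texture sampling and readback
glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT | GL_TEXTURE_UPDATE_BARRIER_BIT);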