
I'm currently working with a compute shader in OpenGL, and my goal is to render from one texture onto another texture with some modifications. However, my compute shader does not seem to have any effect on the textures at all.

After creating the compute shader, I do the following:

//Use the compute shader program
shaderPtr->useProgram();

//Get the uniform location for a uniform called "sourceTex",
//then connect it to texture unit 0
GLint location = glGetUniformLocation(shaderPtr->program, "sourceTex");
glUniform1i(location, 0);

//Bind buffers and call compute shader
this->bindAndCompute(bufferA, bufferB);

The bindAndCompute() function looks like this. Its purpose is to make the two buffers accessible to the compute shader and then dispatch it:

void bindAndCompute(GLuint sourceBuffer, GLuint targetBuffer){
  glBindImageTexture(
    0,                          //Always bind to image unit 0
    sourceBuffer,
    0,                          //Mipmap level 0
    GL_FALSE,                   //Not a layered binding
    0,                          //Layer (ignored when not layered)
    GL_READ_ONLY,               //Only read from this texture
    GL_RGB16F
  );

  glBindImageTexture(
    1,                          //Always bind to image unit 1
    targetBuffer,
    0,                          //Mipmap level 0
    GL_FALSE,                   //Not a layered binding
    0,                          //Layer (ignored when not layered)
    GL_WRITE_ONLY,              //Only write to this texture
    GL_RGB16F
  );

  //this->height is currently 960
  glDispatchCompute(1, this->height, 1);            //Call upon shader 

  glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
}
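To clarify the dispatch dimensions (a sketch; it assumes this->width is 960, which is not shown above but has to match local_size_x in the shader):

//How the dispatch covers the texture:
//  global size x = 1 work group * local_size_x (960)            = 960 invocations
//  global size y = this->height work groups * local_size_y (1)  = 960 invocations
//=> one invocation per texel, but only while the texture is 960 wide
assert(this->width == 960);   //hypothetical sanity check (needs <cassert>)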

And finally, here is the compute shader. For now it only tries to make the second texture completely white.

#version 440
#extension GL_ARB_compute_shader : enable
#extension GL_ARB_shader_image_load_store : enable

layout (rgba16, binding=0) uniform image2D sourceTex;           //Textures bound to 0 and 1 resp. that are used to
layout (rgba16, binding=1) uniform image2D targetTex;           //acquire texture and save changes made to texture

layout (local_size_x=960 , local_size_y=1 , local_size_z=1) in;     //Local work-group size

void main(){

  vec4 result;     //Vec4 to store the value to be written

  ivec2 pxlPos = ivec2(gl_GlobalInvocationID.xy);     //Get pixel position

  /*
  result = imageLoad(sourceTex, pxlPos);

  ...
  */

  imageStore(targetTex, pxlPos, vec4(1.0f));    //Write white to texture
}

Now, bufferB starts out empty. When I run this, I expect bufferB to become completely white; however, after this code runs, bufferB remains empty. My conclusion is that either

A: The compute shader does not write to the texture

B: glDispatchCompute() is not run at all

However, I get no errors, and the shader compiles as it should. I have checked that I bind the texture correctly when rendering: I bind bufferA, whose contents I already know, and then run bindAndCompute(bufferA, bufferA) to turn bufferA white. However, bufferA is unaltered. So I have not been able to figure out why my compute shader has no effect. If anyone has ideas on what I can try, it would be appreciated.
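One way to inspect the texture contents directly would be a readback along these lines (a sketch; bufferB, this->width and this->height as defined elsewhere, and it needs <vector> and <cstdio>):

//Read the whole texture back into client memory and print the first texel
std::vector<float> pixels(4 * this->width * this->height);    //RGBA floats
glBindTexture(GL_TEXTURE_2D, bufferB);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, pixels.data());
printf("texel (0,0): %f %f %f %f\n",
       pixels[0], pixels[1], pixels[2], pixels[3]);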

End note: this is my first question on this site. I've tried to present only the relevant information, but I still feel it may have become too much text anyway. Feedback on how to improve the structure of the question is welcome as well.

---------------------------------------------------------------------

EDIT:

The textures I pass in as sourceBuffer and targetBuffer are defined as follows:

glGenTextures(1, buffer);
glBindTexture(GL_TEXTURE_2D, *buffer);
glTexImage2D(
  GL_TEXTURE_2D,
  0,
  GL_RGBA16F,       //Internal format
  this->width,
  this->height,
  0,
  GL_RGBA,      //Format read
  GL_FLOAT,     //Type of values in read format
  NULL          //source
);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
Could you add how sourceBuffer and targetBuffer are defined/initialized? – BDL

1 Answer


The image format of the images you bind doesn't match the image format declared in the shader. You bind an RGB16F texture (48 bits per texel), but state in the shader that it is of rgba16 format (64 bits per texel).

Formats have to match according to the rules given here. Assuming that you allocated the texture in OpenGL, this means that the total size of each texel has to match. Also note that 3-channel textures are (with some rather strange exceptions) not supported by image load/store.
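Applied to the code in the question, a compatible combination would look roughly like this (a sketch; it assumes the GL_RGBA16F allocation shown in your edit):

//Allocation, image binding and shader declaration must agree on texel size
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
             GL_RGBA, GL_FLOAT, NULL);                  //64 bits per texel
glBindImageTexture(0, sourceBuffer, 0, GL_FALSE, 0,
                   GL_READ_ONLY, GL_RGBA16F);           //not GL_RGB16F
glBindImageTexture(1, targetBuffer, 0, GL_FALSE, 0,
                   GL_WRITE_ONLY, GL_RGBA16F);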

As a side-note: The shader will execute and write if the texture format size matches. But what you write might be garbage because your textures are in 16-bit floating point format (RGBA_16F) while you tell the shader that they are in 16-bit unsigned normalized format (rgba16). Although this doesn't directlyy matter for the compute shader, it does matter if you read-back the texture or access it trough a sampler or write data > 1.0f or < 0.0f into it. If you want 16-bit floats, use rgba16f in the compute shader.