I'm trying to sample a depth texture in a compute shader and copy it into another texture.
The problem is that I don't get correct values when I read from the depth texture:
I've checked (with GDebugger) that the initial values of the depth texture are correct, so it's the imageLoad GLSL function that retrieves the wrong values.
This is my GLSL Compute shader:
#version 430

layout (binding=0, r32f) readonly uniform image2D depthBuffer;
layout (binding=1, rgba8) writeonly uniform image2D colorBuffer;
// we use work groups of 16 x 16 threads
layout (local_size_x = 16, local_size_y = 16) in;
void main()
{
ivec2 position = ivec2(gl_GlobalInvocationID.xy);
// Sampling from the depth texture
vec4 depthSample = imageLoad(depthBuffer, position);
// We linearize the depth value
float f = 1000.0;
float n = 0.1;
float z = (2 * n) / (f + n - depthSample.r * (f - n));
// even if I call memoryBarrier(), barrier() or memoryBarrierShared() here, I still get the same bug
// and finally, we try to create a grayscale image of the depth values
imageStore(colorBuffer, position, vec4(z, z, z, 1));
}
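As a quick sanity check of the linearization step (this snippet is not part of the original code; n and f just mirror the values hard-coded in the shader), note that with n = 0.1 and f = 1000 almost the whole depth range maps to values very close to 0, so even a correct result would look nearly black except close to the far plane:

#include <cstdio>

// Same linearization as in the compute shader, evaluated on the CPU
// purely to check the expected output range (hypothetical helper).
static float linearizeDepth(float depth, float n, float f)
{
    return (2.0f * n) / (f + n - depth * (f - n));
}

int main()
{
    std::printf("%f\n", linearizeDepth(0.0f, 0.1f, 1000.0f)); // ~0.0002 (near plane)
    std::printf("%f\n", linearizeDepth(0.5f, 0.1f, 1000.0f)); // ~0.0004
    std::printf("%f\n", linearizeDepth(1.0f, 0.1f, 1000.0f)); //  1.0    (far plane)
    return 0;
}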
and this is how I'm creating the depth texture and the color texture:
// generate the depth texture
glGenTextures(1, &_depthTexture);
glBindTexture(GL_TEXTURE_2D, _depthTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, wDimensions.x, wDimensions.y, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
// generate the color texture
glGenTextures(1, &_colorTexture);
glBindTexture(GL_TEXTURE_2D, _colorTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, wDimensions.x, wDimensions.y, 0, GL_RGBA, GL_FLOAT, NULL);
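For completeness, the depth texture is filled by attaching it to a framebuffer and rendering the scene into it (as described below). A minimal sketch of that attachment, with a hypothetical FBO handle _depthFBO that does not appear in the original code:

GLuint _depthFBO = 0;  // hypothetical handle, only for illustration
glGenFramebuffers(1, &_depthFBO);
glBindFramebuffer(GL_FRAMEBUFFER, _depthFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, _depthTexture, 0);
glDrawBuffer(GL_NONE);   // depth-only pass, no color attachment
glReadBuffer(GL_NONE);
// check glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE,
// render the scene, then unbind before the compute pass:
glBindFramebuffer(GL_FRAMEBUFFER, 0);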
I fill the depth texture with depth values (I bind it to a framebuffer, roughly as sketched above, and render the scene), and then I call my compute shader this way:
_computeShader.use();
// try to synchronize with the previous pass
glMemoryBarrier(GL_ALL_BARRIER_BITS);
// even if I call glFinish() here, the result is the same
glBindImageTexture(0, _depthTexture, 0, GL_FALSE, 0, GL_READ_ONLY, GL_R32F);
glBindImageTexture(1, _colorTexture, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA8);
glDispatchCompute((wDimensions.x + WORK_GROUP_SIZE - 1) / WORK_GROUP_SIZE,
                  (wDimensions.y + WORK_GROUP_SIZE - 1) / WORK_GROUP_SIZE, 1); // one work group per 16 x 16 pixel tile
// try to synchronize with the next pass
glMemoryBarrier(GL_ALL_BARRIER_BITS);
with:
- wDimensions = size of the context (and of the framebuffer)
- WORK_GROUP_SIZE = 16
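For example (illustrative numbers only, the actual window size isn't given), a 1280 x 720 context would be dispatched as:

// Hypothetical 1280 x 720 window with WORK_GROUP_SIZE = 16:
int groupsX = (1280 + WORK_GROUP_SIZE - 1) / WORK_GROUP_SIZE; // = 80
int groupsY = ( 720 + WORK_GROUP_SIZE - 1) / WORK_GROUP_SIZE; // = 45
// 80 * 45 work groups of 16 * 16 invocations cover every pixel; invocations that
// fall outside the window hit out-of-bounds texels, where imageLoad returns zeros
// and imageStore is ignored, so rounding up is harmless.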
Do you have any idea of why I don't get valid depth values?
EDIT:
This is what the color texture looks like when I render a sphere:
Also, it seems that glClear(GL_DEPTH_BUFFER_BIT) doesn't do anything: even if I call it just before glDispatchCompute(), I still get the same image... How is this possible?
Comments:
GL_DEPTH_COMPONENT16 is not a floating-point image format. It is fixed-point (e.g. GL_R16). There is only one floating-point depth image format (well, two if you count packed depth+stencil), and that is 32-bit: GL_DEPTH_COMPONENT32F. – Andon M. Coleman
I call glMemoryBarrier(GL_ALL_BARRIER_BITS); before and after the glDispatchCompute() but it doesn't change anything. I guess I didn't really understand how glMemoryBarrier works... – Paul
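To illustrate the first comment (w and h stand in for the texture size and are not from the original code), the fixed-point and floating-point depth allocations look like this, and only the second one stores 32-bit floats:

// Fixed-point (normalized) depth, comparable to GL_R16:
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, w, h, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, NULL);
// The only floating-point depth format (aside from packed GL_DEPTH32F_STENCIL8):
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, w, h, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);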