
I've got a sampler2DShadow in my shader and I want to use it to implement shadow mapping. My shadow texture has the correct parameters: GL_TEXTURE_COMPARE_MODE is set to GL_COMPARE_R_TO_TEXTURE and GL_TEXTURE_COMPARE_FUNC is set to GL_LEQUAL (meaning the comparison should return 1 if the r value of my coordinates is less than or equal to the depth value fetched from the texture). This texture is bound to the GL_DEPTH_ATTACHMENT of an FBO rendered in light-space coordinates.

What coordinates should I give the texture2D function in my final fragment shader? I currently have a

smooth in vec4 light_vert_pos

declared in my fragment shader, and the vertex shader computes it as

light_vert_pos = light_projection_camera_matrix*modelview*in_Vertex;

I would assume I could multiply my lighting by the value

texture2D(shadowmap,(light_vert_pos.xyz)/light_vert_pos.w)

but this does not seem to work. Since light_vert_pos is only in post-projective coordinates (the matrix used to create it is the same matrix I use to fill the depth buffer in the FBO), should I manually clamp the three x/y/z components to [0,1]?

Comment: Check that your depth texture contains the correct image. – kvark

1 Answer


You don't say how you generated your depth values, so I'll assume you did so by rendering triangles using a normal projection. That is, you transform the geometry to camera space, transform it to projection space, and let the rasterization pipeline handle things from there as normal.

In order to make shadow mapping work, your texture coordinates must match what the rasterizer did.

The output of the vertex shader is in clip space. From there, the GPU performs the perspective divide, followed by the viewport transform. The latter uses the values from glViewport and glDepthRange to compute the window-space XYZ. The window-space Z is the depth value written to the depth buffer.
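As a sketch, those fixed-function steps look like this (GLSL-style pseudocode; vx, vy, vw, vh stand for the glViewport parameters and n, f for the glDepthRange values — they are placeholders, not real built-in variables):

```glsl
// Starting from the vertex shader output gl_Position (clip space):
vec3 ndc = gl_Position.xyz / gl_Position.w;  // perspective divide -> NDC, each in [-1, 1]

// Viewport transform, using glViewport(vx, vy, vw, vh) and glDepthRange(n, f):
float win_x = vx + (ndc.x * 0.5 + 0.5) * vw;
float win_y = vy + (ndc.y * 0.5 + 0.5) * vh;
float win_z = (f - n) * 0.5 * ndc.z + (n + f) * 0.5;  // this value lands in the depth buffer
```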

Note that this is all during the depth pass: the generation of the depth values for the shadow map.

However, you can take some shortcuts. If your glViewport range was set to the same size as the texture (which is generally how it's done), then you can ignore the viewport transform. You will still need the glDepthRange you used in the depth pass.

In your fragment shader, you can perform the perspective divide, which puts the coordinates in normalized device coordinate (NDC) space. That space is [-1, 1] in all directions. Your texture coordinates are [0, 1], so you need to divide the X and Y by two and add 0.5 to them:

vec3 ndc_space_values = light_vert_pos.xyz / light_vert_pos.w;
vec3 texCoords;
texCoords.xy = ndc_space_values.xy * 0.5 + 0.5;

To compute the Z value, you need to know the near and far values you use for glDepthRange.

texCoords.z = ((f-n) * 0.5) * ndc_space_values.z + ((n+f) * 0.5);

Where n and f are the glDepthRange near and far values. You can of course precompute some of these and pass them as uniforms. Or, if you use the default range of near=0 and far=1, you get

texCoords.z = ndc_space_values.z * 0.5 + 0.5;

Which looks familiar somehow.
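Putting the pieces together, here is a minimal fragment-shader sketch (assuming the default glDepthRange of near=0, far=1, and reusing the shadowmap and light_vert_pos names from the question; with a sampler2DShadow, texture() performs the depth comparison itself and returns 0.0 or 1.0, possibly filtered):

```glsl
#version 130

uniform sampler2DShadow shadowmap;
smooth in vec4 light_vert_pos;  // light-space clip coordinates from the vertex shader

float shadow_factor()
{
    // Perspective divide: clip space -> NDC, components in [-1, 1].
    vec3 ndc = light_vert_pos.xyz / light_vert_pos.w;

    // Remap to [0, 1]: XY become texture coordinates, Z the reference depth.
    vec3 shadow_coord = ndc * 0.5 + 0.5;

    // The comparison (GL_LEQUAL against the stored depth) happens inside texture().
    return texture(shadowmap, shadow_coord);
}
```

Multiply your lighting contribution by the returned value.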

Aside:

Since you defined your inputs with in rather than varying, you have to be using GLSL 1.30 or above. So why are you using texture2D (an old function that, incidentally, only accepts a sampler2D, not a sampler2DShadow) rather than texture?