
I've been trying to wrap my head around shadow mapping in OpenGL for the past week and it hasn't been going well, so I was hoping somebody here could help me out. I've been learning from LearnOpenGL's shadow mapping tutorial, which uses forward rendering, and from what I gather, shadow mapping with deferred rendering is a bit different. For starters, here is my shadow map:

Shadow map picture

It is red, but apparently that is expected according to a friend: a depth texture only has one channel, which samplers expose through the red component. Still, I feel like I might already be doing something wrong. Assuming I'm not, the shadow map renders properly and arrives at the lighting pass of the deferred renderer without any problems. I also send a buffer containing every fragment's position in light space, transformed by a light-space matrix, shown after the depth-format sketch below.
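
For reference, the shadow map's depth attachment would have been allocated along these lines. This is a minimal sketch rather than my exact code; shadowMapTex, SHADOW_WIDTH and SHADOW_HEIGHT are placeholder names.

//Single-channel depth texture; samplers expose its value through .r
glGenTextures(1, &shadowMapTex);
glBindTexture(GL_TEXTURE_2D, shadowMapTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, SHADOW_WIDTH, SHADOW_HEIGHT,
             0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);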

//Create the light-space matrix
GLfloat near_plane = 1.0f;
GLfloat far_plane = 10.0f;
glm::mat4 lightProjMat = glm::ortho(-10.0f, 10.0f, -10.0f, 10.0f, near_plane, far_plane);
glm::mat4 lightViewMat = glm::lookAt(glm::vec3(-2.0f, 6.0f, -1.0f), //Light position
                                     glm::vec3(0.0f, 0.0f, 0.0f),   //Looking at the origin
                                     glm::vec3(0.0f, 1.0f, 0.0f));  //World up

shadowBuf.lightSpaceMat = lightProjMat * lightViewMat;

I don't have a directional light right now, only two point lights, so using an orthographic projection will give me a physically incorrect shadow, but a shadow nonetheless. Right now it doesn't matter where the shadow comes from; I just want to learn. Then I can worry about making it match the light source (a perspective version is sketched below).
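
When I do hook it up to one of the point lights, I'd expect to swap the orthographic projection for a perspective one, something along these lines. This is only a sketch: the 90-degree field of view is a typical choice for a square shadow map, and pointLightPos is a placeholder for wherever the light actually sits.

//Perspective light-space matrix for a (spot-style) point light; values are guesses
glm::mat4 lightProjMat = glm::perspective(glm::radians(90.0f), 1.0f, near_plane, far_plane);
glm::mat4 lightViewMat = glm::lookAt(pointLightPos,                //placeholder light position
                                     glm::vec3(0.0f, 0.0f, 0.0f), //aimed at the scene origin
                                     glm::vec3(0.0f, 1.0f, 0.0f));
shadowBuf.lightSpaceMat = lightProjMat * lightViewMat;

With a perspective projection, the w component of the light-space position is no longer 1, so the perspective division in the lighting pass starts doing real work.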

And here is the matrix (LSMat) being used in the GLSL vertex shader of the geometry pass:

void main()
{
    gl_Position = vec4(position, 1.0f);
    vs_out.uv = uv;
    vs_out.normal = normal;
    vs_out.FragPos = vec3(model * vec4(position, 1.0f));
    vs_out.FragPosLS = LSMat * vec4(vs_out.FragPos, 1.0f); //Fragment position in light space
}

Then in the fragment shader it is saved to a texture and shipped off to the lighting pass; the attachment it lands in looks roughly like the sketch below.
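
One detail that can silently break this: the light-space position is neither normalized nor guaranteed to be positive, so the texture it is written into needs a floating-point internal format, or the values get clamped on write. The attachment would be set up along these lines; a minimal sketch, where screenWidth, screenHeight and the attachment slot are placeholders and gFragPosLS matches the sampler name used in the lighting pass.

//Floating-point G-buffer attachment so light-space positions survive intact
glGenTextures(1, &gFragPosLS);
glBindTexture(GL_TEXTURE_2D, gFragPosLS);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, screenWidth, screenHeight, 0, GL_RGBA, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, gFragPosLS, 0);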

So now I have everything I need to do the shadow calculation in my lighting pass (hopefully, at least).

Here is the part of my fragment shader in the light pass where I do the shadow calculation:

float shadowCalc()
{
    vec4 fragPosLS = texture(gFragPosLS, uvs); //Fetch the light-space frag pos from the texture

    vec3 projCoords = fragPosLS.xyz / fragPosLS.w; //Manually do the perspective division
    projCoords = projCoords * 0.5 + 0.5; //Get the pos in [0,1] range

    float closestDepth = texture(shadowMap, projCoords.xy).r;
    float currentDepth = projCoords.z;

    float shadow = currentDepth > closestDepth ? 1.0 : 0.0;

    return shadow;
}

Finally, I apply the shadow value to my diffuse and specular terms:

    float shadowValue = shadowCalc();
    diffuse = diffuse * (1.0 - shadowValue);
    specular = specular * (1.0 - shadowValue);

    return ((diffuse * vec4(gColor, 1.0f)) + specular + ambient);

Then I just output the resulting color as usual, and the result is this:

Picture of the result

The awful textures are just a result of me not bothering to UV map things in Maya right now. But beyond that, you can clearly see that the shadows are all kinds of messed up. I even tried disabling one of the lights to make sure it wasn't breaking because both lights run the shadowCalc function, but that doesn't seem to be it (though I'm not 100% sure).

If anyone has any ideas about why this is happening, I'm all ears; it has been confusing me for a week now. I only know that the stripes on the ground vanish when I use a shadow bias (the biased variant is sketched below), but the huge shadow behind the pyramid remains. Not only does it look bad, it also points in completely the wrong direction compared to the angle the shadow map is rendered from, so it has to be something with the coordinates. I've read that deferred rendering requires you to do some things differently here, but I haven't managed to find a proper answer on what.
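
For completeness, the biased variant I tried looks roughly like this. A sketch only: the bias constant was tuned by eye, and the z > 1.0 guard is borrowed from the LearnOpenGL tutorial.

float shadowCalcBiased()
{
    vec4 fragPosLS = texture(gFragPosLS, uvs);
    vec3 projCoords = fragPosLS.xyz / fragPosLS.w;
    projCoords = projCoords * 0.5 + 0.5;

    //Anything past the light frustum's far plane can't be in shadow
    if (projCoords.z > 1.0)
        return 0.0;

    float closestDepth = texture(shadowMap, projCoords.xy).r;
    float bias = 0.005; //removes the striped self-shadowing ("shadow acne")
    return projCoords.z - bias > closestDepth ? 1.0 : 0.0;
}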

This is already quite the mammoth post, but if I've forgotten to show anything that might clear things up, do tell.

Comments:

Grimm Shado wmapping? – genpfault

There's no discussion of how your position texture (which you should consider optimizing away in the future by using reconstruction) is encoded. You've scaled the texture coordinates used to sample the depth texture into [0.0, 1.0], but I don't know about your position buffer. Unless it is floating-point, the fact that your position is neither normalized nor guaranteed to be positive can be a really big problem. – Andon M. Coleman

You are reading only the red value from the depth map; are you sure you are writing the value to the red component properly in the depth pass? You are also reading currentDepth from the z component. Can you post the code for these so we can check the rest of the code? Otherwise, it seems fine to me. – codetiger

Please also add the code where you initialize your depth-map format. Are you using a single-channel buffer? – Harish

@codetiger: A depth texture only has one component; given the behavior of GLSL 1.30+, you can swizzle any component you want and you'll get the same result. – Andon M. Coleman

1 Answer


Just a stab in the dark: I had an issue with shadow mapping combined with deferred rendering that looked pretty similar to what you've shown in the screenshot. In my case, I was reading Z values from the depth texture incorrectly. The values in a depth texture created with

gl.texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT16, 2048, 2048, 0, gl.DEPTH_COMPONENT, gl.UNSIGNED_SHORT, null);

were normalized to (-1, +1), so I had to map them back with

(texture(shadowMap, projCoords.xy).r - .5) * 2.

The visibility calculation then looks like this:

float bias = .01;
float visibility = (texture(shadowMap, projCoords.xy).r - .5) * 2. < projCoords.z - bias ? 0.1: 1.0;
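
Folded into a function shaped like the question's shadowCalc, the whole calculation would look roughly like this. A sketch only: it assumes projCoords.z is left in NDC, i.e. without the * 0.5 + 0.5 remap, so that both sides of the comparison live in (-1, +1).

float shadowVisibility()
{
    vec4 fragPosLS = texture(gFragPosLS, uvs);
    vec3 projCoords = fragPosLS.xyz / fragPosLS.w; //NDC, (-1, +1) on every axis

    vec2 shadowUV = projCoords.xy * 0.5 + 0.5; //only the texture coords need [0, 1]

    //Undo the (-1, +1) normalization of the stored depth, as described above
    float closestDepth = (texture(shadowMap, shadowUV).r - 0.5) * 2.0;

    float bias = 0.01;
    return closestDepth < projCoords.z - bias ? 0.1 : 1.0;
}

Multiplying diffuse and specular by this visibility value then replaces the (1.0 - shadowValue) factor from the question.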