
I have implemented shadow maps in GLSL by rendering the view from a light into a depth texture, and then in a second pass compare these values when rendering my geometry from camera view.

In abbreviated code, the vertex shader of the second (main) render pass is:

...
gl_Position = camviewprojmat * position;
shadowcoord = lightviewprojmat * position;
...

and in the fragment shader I look up this shadowcoord texel in the shadow texture to see whether the light sees the same surface (lit) or something closer (shadowed). This is done by setting GL_TEXTURE_COMPARE_MODE to GL_COMPARE_REF_TO_TEXTURE on the depth texture.
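As a concrete sketch of that lookup (assuming GLSL 3.30 and a sampler2DShadow uniform; the names here are illustrative, not my actual code), the fragment shader can look like this:

```glsl
#version 330 core

// Depth texture bound with GL_TEXTURE_COMPARE_MODE = GL_COMPARE_REF_TO_TEXTURE
uniform sampler2DShadow shadowmap;

// lightviewprojmat * position from the vertex shader; a bias matrix
// (scale/translate by 0.5) is assumed folded in so that xyz/w lands
// in [0,1] texture space.
in vec4 shadowcoord;

out vec4 fragcolor;

void main()
{
    // textureProj divides shadowcoord.xyz by shadowcoord.w, then compares
    // the resulting z against the stored depth: 1.0 = lit, 0.0 = shadowed.
    float lit = textureProj(shadowmap, shadowcoord);
    fragcolor = vec4(vec3(lit), 1.0);
}
```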

This works great for lights that have an orthographic projection. But once I use a perspective projection to create wide-angle spot lights, I encounter errors in the image.

I have determined the cause of my issues to be the incorrectly interpolated depth values shadowcoord.z / shadowcoord.w, which, due to the perspective projection, are not linear in eye space. Yet the interpolation over the triangle is linear.

At the vertex locations, the depth values are determined exactly, but the fragments between vertex locations get incorrectly interpolated values for depth.

This is demonstrated by the image below. The yellow crosshairs mark the light position, a spot light looking straight down. The colour coding is the light depth, from -1 (red) to +1 (blue).

depth interpolation

The pillar in the middle has long tall triangles from top to bottom, and all the interpolated light-depth values are off by a lot.

The stairs on the left have many more vertex locations, so they sample the non-linear depth more accurately.

The projection matrix I use for the spot light is created like this (I use a very wide angle of 170 deg):

        // create a perspective projection matrix
        const float f = 1.0f / tanf(fov/2.0f);
        const float aspect = 1.0f;
        float* mout = sl_proj.data;

        mout[0] = f / aspect;
        mout[1] = 0.0f;
        mout[2] = 0.0f;
        mout[3] = 0.0f;

        mout[4] = 0.0f;
        mout[5] = f;
        mout[6] = 0.0f;
        mout[7] = 0.0f;

        mout[8] = 0.0f;
        mout[9] = 0.0f;
        mout[10] = (zFar+zNear) / (zNear-zFar);
        mout[11] = -1.0f;

        mout[12] = 0.0f;
        mout[13] = 0.0f;
        mout[14] = 2.0f * zFar * zNear / (zNear-zFar);
        mout[15] = 0.0f;

How can I deal with this non-linearity in the light depth buffer? Is it possible to have perspective projection that has linear depth values? Should I compute my shadow coordinates differently? Can they be corrected after the fact?

Note: I did consider doing the projection in the fragment shader instead, but as I have many lights in the scene, doing all those matrix multiplications in the fragment shader would be too costly in computation.


1 Answer


This Stack Overflow answer describes how to create a linear depth buffer.

It entails writing out the eye-space depth, -(modelviewmat * position).z, in the vertex shader, and then computing the linear depth in the fragment shader as:

gl_FragDepth = ( depth - zNear ) / ( zFar - zNear );

And with a linear depth buffer, the fragment interpolators can do their job properly.
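A sketch of what that light-pass shader pair could look like (the uniform and varying names are illustrative, not from the linked answer):

```glsl
// --- vertex shader (light depth pass) ---
#version 330 core
uniform mat4 modelviewmat;   // model-view matrix of the light
uniform mat4 projmat;        // the light's perspective projection
in vec4 position;
out float lineardepth;       // positive eye-space distance along -z

void main()
{
    vec4 eyepos = modelviewmat * position;
    lineardepth = -eyepos.z;
    gl_Position = projmat * eyepos;
}

// --- fragment shader (light depth pass) ---
#version 330 core
uniform float zNear;
uniform float zFar;
in float lineardepth;

void main()
{
    // remap eye-space depth linearly into [0,1] for the depth buffer
    gl_FragDepth = (lineardepth - zNear) / (zFar - zNear);
}
```

The main pass then has to build its comparison reference the same way, remapping the fragment's light-space eye depth with the same zNear/zFar, so both sides of the depth compare use the linear encoding. Note that writing gl_FragDepth disables early depth testing in the light pass.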