2
votes

I am trying to implement shadows in a WebGL scene for the first time, and as far as I know the most straightforward way to do this is to use a shadow map. However, I just can't find a tutorial explaining this concept that isn't either about OpenGL or based on a library like three.js.

However, I've read that I'll have to write two pairs of shaders. The first pair of shaders is used to build the shadow map, which somehow has to be stored in a framebuffer object. I would then have to take the stored shadow map from the framebuffer and pass it to the second pair of shaders, which is used to draw the objects of my scene.

The shadow map must contain the information about how far the light can travel from its source before it hits an object. With this information passed to the shader that draws an object, it should be possible to determine which parts are lit directly by the light source and which parts are lit only by ambient light.

Well, so much for the theory, but I have no idea how to get this working...

For experimenting, I've set up a pretty simple scene rendering two spheres lit by a single point light source, which looks like this:

spheres lit by point light

Both spheres are created with their center at [0.0, 0.0, 0.0] and with radius 1.0.

The matrix for the bigger sphere is translated by [-2.0, 0.0, -2.0] and scaled by [2.0, 2.0, 2.0].

The matrix for the smaller sphere is translated by [2.0, 0.0, 2.0] and scaled by [0.5, 0.5, 0.5].

The position of the point light source is [4.0, 0.0, 4.0].

So, the smaller sphere now sits between the bigger sphere and the light source, and therefore there should be an area on the surface of the bigger sphere that is not lit directly.

The two shaders I use for this scene look like this:

vertex shader

  attribute vec4 aPosition;
  attribute vec3 aNormal;

  uniform mat4 uProjectionMatrix;
  uniform mat4 uModelViewMatrix;
  uniform mat3 uNormalMatrix;

  varying vec4 vPosition;
  varying vec3 vTransformedNormal;

  void main ( ) {
    vTransformedNormal = uNormalMatrix * aNormal;
    vPosition = uModelViewMatrix * aPosition;
    gl_Position = uProjectionMatrix * vPosition;
  }

fragment shader

  precision highp float;

  uniform vec3 uLightPosition;

  varying vec4 vPosition;
  varying vec3 vTransformedNormal;

  void main ( ) {
    vec3 lightDirection = normalize(uLightPosition - vPosition.xyz);

    float diffuseLightWeighting = max(
        dot(normalize(vTransformedNormal), lightDirection), 0.0);

    vec3 lightWeighting = vec3(0.1, 0.1, 0.1) +
        vec3(0.8, 0.8, 0.8) * diffuseLightWeighting;

    gl_FragColor = vec4(vec3(1.0, 1.0, 1.0) * lightWeighting, 1.0);
  }

So, the first thing I'd have to do now would be to write another pair of shaders. Because they're not used to actually draw anything, I can omit the attribute for the normals as well as the uniform for the normal matrix, and there's also no need for a projection matrix, right?

The second vertex shader could then look like this:

  attribute vec4 aPosition;

  uniform mat4 uModelViewMatrix;

  varying vec4 vPosition;

  void main ( ) {
    vPosition = uModelViewMatrix * aPosition;
    gl_Position = vPosition;
  }

But what about the fragment shader? And anyway, how do I find out where the light from the point light source hits an object? I mean, that would generally require passing in the vertex position data from all relevant objects at once, wouldn't it? (Though in this particular case it would only be necessary to pass in the vertex positions of the smaller sphere...)

Now my question is: how do I go on from here? What has to be written into the fragment shader, how do I save the shadow map, and how can I use it to calculate the shadow of the smaller sphere on the bigger sphere?

I fear I may be asking too much, but it would be great if someone could at least point me in the right direction.


2 Answers

5
votes

See http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-16-shadow-mapping/

Basically, as you found out, shadow mapping is done in two passes. The first pass renders to a depth texture from the point of view of the light source, which is then used in the second pass to determine whether a pixel is in shadow. The idea is that if a pixel is further from the light than the depth/distance stored in the shadow map, the scene must contain another object that is closer to the light and blocks it. In other words, the current pixel is in shadow.

The first pass uses a basically standard vertex shader together with an empty fragment shader, rendering to a depth texture. Depth textures (the WEBGL_depth_texture extension) are pretty well supported at the moment.
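A minimal shader pair for the depth pass might look something like this (a sketch; the uniform names are illustrative, and note that, unlike in the question's attempt, the light's view-projection matrix is applied, because the scene is rendered from the light's point of view):

```glsl
// depth-pass vertex shader: transform with the light's matrices
attribute vec4 aPosition;

uniform mat4 uLightViewProjection; // light's view-projection matrix
uniform mat4 uModelMatrix;

void main ( ) {
  gl_Position = uLightViewProjection * uModelMatrix * aPosition;
}
```

```glsl
// depth-pass fragment shader: intentionally empty,
// the depth value is written to the depth attachment automatically
precision highp float;

void main ( ) {
}
```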

In the second pass, you need to set up the depth comparison mentioned above. You then calculate a shadow value, usually 1 or 0, based on the result of the comparison and use that in the lighting calculations. For example, a shadow value of 1 blocks the diffuse and specular light contributions.
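The comparison itself is simple; here it is in plain JavaScript, purely for illustration (outside any shader), using the same convention that 1 means "in shadow":

```javascript
// Illustrative JS version of the shadow test (1.0 = in shadow, 0.0 = lit).
// The bias offsets the fragment depth slightly to avoid "shadow acne"
// caused by limited depth-buffer precision.
function shadowTerm(fragmentDepth, shadowMapDepth, bias) {
  return fragmentDepth - bias > shadowMapDepth ? 1.0 : 0.0;
}

// A fragment at depth 0.8 behind an occluder recorded at 0.5 is shadowed:
console.log(shadowTerm(0.8, 0.5, 0.005)); // 1
// The occluder's own surface compares (almost) equal and stays lit:
console.log(shadowTerm(0.5, 0.5, 0.005)); // 0
```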

As for how to change the render output destination, you need to use framebuffer objects (FBOs).
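Creating such an FBO with a depth texture attached might look roughly like this (a sketch of the GL state setup; it assumes `gl` is your existing WebGL context and that the WEBGL_depth_texture extension is available):

```javascript
// Sketch: create a framebuffer with a depth texture attached (the shadow map).
var ext = gl.getExtension('WEBGL_depth_texture');
if (!ext) {
  throw new Error('WEBGL_depth_texture is not supported');
}

var size = 1024; // shadow map resolution

var depthTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, depthTexture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT, size, size, 0,
              gl.DEPTH_COMPONENT, gl.UNSIGNED_SHORT, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

var fbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT,
                        gl.TEXTURE_2D, depthTexture, 0);
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
```

You bind `fbo` before the first pass, then bind `null` again before the second pass to render to the canvas, and bind `depthTexture` to a texture unit so the lighting shader can sample it.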

The complete setup may look something like this:

// HANDLE VBOS

fbo_shadow.setAsDrawTarget();
gl.clear(gl.DEPTH_BUFFER_BIT);

shadow_depth_shader.use();
shadow_depth_shader.setUnif("u_viewProjection", lightMatrix);

// DRAW EVERYTHING

fbo_lightPass.setAsDrawTarget();
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

lighting_pass_shader.use();
lighting_pass_shader.setUnif("u_shadowMap", fbo_shadow.depthTexture);
lighting_pass_shader.setUnif("u_viewProjection", camera.getVPMatrix());
// other unifs ...

// DRAW EVERYTHING AGAIN

EDIT: added a sample shadowCalculation function for the fragment shader:

float shadowCalculation(vec4 fragPosLightSpace, sampler2D u_shadowMap, float bias){
   // perform the perspective divide and map from NDC [-1,1] to the [0,1] range
   vec3 projCoords = fragPosLightSpace.xyz / fragPosLightSpace.w;
   projCoords = projCoords * 0.5 + 0.5;
   // closest depth seen from the light vs. this fragment's depth
   float shadowDepth = texture2D(u_shadowMap, projCoords.xy).r;
   float depth = projCoords.z;
   // 1.0 = in shadow, 0.0 = lit; the bias avoids "shadow acne"
   float shadow = step(shadowDepth, depth - bias);
   return shadow;
}
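One detail the function above leaves open is where fragPosLightSpace comes from: in the second pass, the vertex shader has to transform each vertex with the same light view-projection matrix that was used in the depth pass and hand the result to the fragment shader. A sketch (the uniform and varying names are illustrative):

```glsl
// additions to the second-pass vertex shader
uniform mat4 uLightViewProjection; // same matrix as in the depth pass
uniform mat4 uModelMatrix;

varying vec4 vFragPosLightSpace;

// inside main():
//   vFragPosLightSpace = uLightViewProjection * uModelMatrix * aPosition;
```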
0
votes

The following code takes an unusual approach, but it is very helpful.

<html>
<body>
    <canvas id="myCanvas" width="600" height="400">
    </canvas>
    <script src="https://www.drawing3d.de/bridge.js"></script>
    <script src="https://www.drawing3d.de/WebDrawing3d.js"></script>
    <script src="https://www.drawing3d.de/Classes.js"></script>
    <script>
        var canvas = document.getElementById("myCanvas");
        var WebDevice = new Device(canvas);
        WebDevice.Shadow = true;
        WebDevice.Paint = draw;
        WebDevice.Refresh();
        function draw() {
            WebDevice.drawBox(new xyz(-10, -10, -1), new xyz(20, 20, 1));
            WebDevice.drawSphere(new xyz(0, 0, 5), 5);
        }
    </script>
</body>
</html>