I am trying to implement shadows in a WebGL scene for the first time. As far as I know, the most straightforward way to do this is to use a shadow map, but I just can't find a tutorial explaining this concept that isn't either about plain OpenGL or based on some library like three.js.
However, I've read that I'll have to write two pairs of shaders. The first pair of shaders is used to build the shadow map, which somehow has to be stored in a framebuffer object. Then I would have to take the stored shadow map from the framebuffer and pass it to the second pair of shaders, which is used to draw the objects of my scene.
The shadow map must contain the information of how far the light can travel from its source before it hits an object. With this information passed to the shader that draws an object, it should be possible to tell which parts of a surface are lit directly by the light source and which parts are only lit by ambient light.
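If I understand it correctly, the test in the second pass would then look roughly like this (pseudo-GLSL; `uShadowMap`, `vPositionFromLight` and `unpackDepth` are just names I made up, not anything standard):

```glsl
// rough sketch of my understanding, not working code:
// vPositionFromLight = light's projection * light's view * vertex position,
// computed in the vertex shader of the second pass
vec3 shadowCoord = (vPositionFromLight.xyz / vPositionFromLight.w) * 0.5 + 0.5;
// depth of the nearest surface the light "sees" in this direction
float nearestToLight = unpackDepth(texture2D(uShadowMap, shadowCoord.xy));
// if this fragment is farther from the light than that surface, it is in shadow
float visibility = shadowCoord.z > nearestToLight + 0.005 ? 0.0 : 1.0;
```

At least that's how I picture it; the small offset is apparently needed to avoid self-shadowing artifacts.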
Well, so much for the theory, but I have no idea how to get this working...
For experimenting, I've set up a pretty simple scene, rendering two spheres lit by a single point light source, which looks like this:
Both spheres are created with their center at [0.0, 0.0, 0.0] and with radius 1.0. The matrix for the bigger sphere is translated by [-2.0, 0.0, -2.0] and scaled by [2.0, 2.0, 2.0]. The matrix for the smaller sphere is translated by [2.0, 0.0, 2.0] and scaled by [0.5, 0.5, 0.5]. The position of the point light source is [4.0, 0.0, 4.0].
So the smaller sphere now sits directly between the bigger sphere and the light source, and there should therefore be an area on the surface of the bigger sphere that is not lit directly.
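To render a depth pass from the light's point of view, I'll presumably need a view matrix for the light. Here's a self-contained sketch of how I'd build one with plain arrays (`lookAt` is my own helper, not part of WebGL, so I may have gotten details wrong):

```javascript
// My own sketch of a look-at view matrix for the depth pass, built with
// plain arrays (no library).
function lookAt(eye, target, up) {
    const sub = (a, b) => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
    const cross = (a, b) => [
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0],
    ];
    const normalize = (a) => {
        const l = Math.hypot(a[0], a[1], a[2]);
        return [a[0] / l, a[1] / l, a[2] / l];
    };
    const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

    const z = normalize(sub(eye, target)); // the camera looks along -z
    const x = normalize(cross(up, z));
    const y = cross(z, x);

    // column-major 4x4, as WebGL's uniformMatrix4fv expects
    return [
        x[0], y[0], z[0], 0,
        x[1], y[1], z[1], 0,
        x[2], y[2], z[2], 0,
        -dot(x, eye), -dot(y, eye), -dot(z, eye), 1,
    ];
}

// view matrix for my point light at [4.0, 0.0, 4.0], looking at the origin
const lightViewMatrix = lookAt([4, 0, 4], [0, 0, 0], [0, 1, 0]);
```

I assume I'd multiply this with the model matrices in the depth pass just like the normal view matrix.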
The two shaders I use for this scene look like this:
vertex shader
attribute vec4 aPosition;
attribute vec3 aNormal;
uniform mat4 uProjectionMatrix;
uniform mat4 uModelViewMatrix;
uniform mat3 uNormalMatrix;
varying vec4 vPosition;
varying vec3 vTransformedNormal;
void main() {
    vTransformedNormal = uNormalMatrix * aNormal;
    vPosition = uModelViewMatrix * aPosition;
    gl_Position = uProjectionMatrix * vPosition;
}
fragment shader
precision highp float;
uniform vec3 uLightPosition;
varying vec4 vPosition;
varying vec3 vTransformedNormal;
void main() {
    vec3 lightDirection = normalize(uLightPosition - vPosition.xyz);
    float diffuseLightWeighting = max(dot(normalize(vTransformedNormal), lightDirection), 0.0);
    vec3 lightWeighting = vec3(0.1, 0.1, 0.1) + vec3(0.8, 0.8, 0.8) * diffuseLightWeighting;
    gl_FragColor = vec4(vec3(1.0, 1.0, 1.0) * lightWeighting, 1.0);
}
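Regarding storing the shadow map: my guess for the framebuffer setup, pieced together from the OpenGL tutorials (`createShadowFBO` is my own name, and I'm not sure every detail carries over to WebGL). Since plain WebGL apparently has no guaranteed depth texture, the idea seems to be to render the depth into an ordinary RGBA color texture, with a renderbuffer attached so depth testing still works during the pass itself:

```javascript
// My guess at the framebuffer setup for the depth pass; "gl" would be the
// WebGL rendering context, "size" the resolution of the shadow map.
function createShadowFBO(gl, size) {
    // color texture that will hold the (packed) depth values
    const texture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, texture);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, size, size, 0,
                  gl.RGBA, gl.UNSIGNED_BYTE, null);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

    // depth renderbuffer so the depth test works while rendering the map
    const depthBuffer = gl.createRenderbuffer();
    gl.bindRenderbuffer(gl.RENDERBUFFER, depthBuffer);
    gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16, size, size);

    // framebuffer tying both together
    const framebuffer = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                            gl.TEXTURE_2D, texture, 0);
    gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT,
                               gl.RENDERBUFFER, depthBuffer);

    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
    gl.bindTexture(gl.TEXTURE_2D, null);
    return { framebuffer, texture };
}
```

I suppose I'd check `gl.checkFramebufferStatus(gl.FRAMEBUFFER)` for completeness, bind the framebuffer before the depth pass, and later bind `texture` when drawing the actual scene. But please correct me if this setup is wrong.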
So, the first thing I'd have to do now is write another pair of shaders. Since they're not used to actually draw anything, I can omit the attribute for the normals as well as the uniform for the normal matrix, and there's also no need for a projection matrix, right?
The second vertex shader could then look like this:
attribute vec4 aPosition;
uniform mat4 uModelViewMatrix;
varying vec4 vPosition;
void main() {
    vPosition = uModelViewMatrix * aPosition;
    gl_Position = vPosition;
}
But what about the fragment shader? And anyway, how do I find out where the light from the point light source hits an object? I mean, that would generally require passing in the vertex position data of all relevant objects at once, wouldn't it? (Though in this particular case it would only be necessary to pass in the vertex positions of the smaller sphere...)
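The only concrete hint I've picked up so far (from OpenGL material, so it may not carry over exactly) is that, without floating point textures, the depth value gets packed into the four bytes of an RGBA color, something like:

```glsl
precision highp float;

void main() {
    // pack gl_FragCoord.z (in [0,1]) into the four 8-bit RGBA channels,
    // because a plain WebGL color buffer only stores bytes per channel
    const vec4 bitShift = vec4(1.0, 256.0, 256.0 * 256.0, 256.0 * 256.0 * 256.0);
    const vec4 bitMask = vec4(1.0 / 256.0, 1.0 / 256.0, 1.0 / 256.0, 0.0);
    vec4 rgba = fract(gl_FragCoord.z * bitShift);
    rgba -= rgba.gbaa * bitMask;
    gl_FragColor = rgba;
}
```

Presumably the second pass would then undo this with a dot product against the inverse shift, but I haven't verified any of this.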
Now, my question is: how do I go on from here? What has to be written into the fragment shader, how do I store the shadow map, and how can I use it to calculate the shadow of the smaller sphere on the bigger sphere?
I fear that I may be asking too much, but it would be great if someone could at least point me in the right direction.