
I know there are a couple of threads on the net about the same problem, but they haven't helped me because my implementation is different.

I'm rendering colors, normals, and depth in view space into textures. In a second pass I bind the textures, draw a fullscreen quad, and calculate the lighting. The directional light seems to work fine, but the point lights move with the camera.

Here is the corresponding shader code:

Lighting step vertex shader

in vec2 inVertex;
in vec2 inTexCoord;
out vec2 texCoord;
void main() {
    // The fullscreen quad is already in clip space, so no transform is applied
    gl_Position = vec4(inVertex, 0, 1.0);
    texCoord = inTexCoord;
}

Lighting step fragment shader

uniform sampler2D depthBuffer;
uniform sampler2D normalBuffer;
uniform sampler2D colorBuffer;
uniform float nearPlane;
uniform float farPlane;
uniform float width;
uniform float height;
uniform vec4 lightPosition;
uniform vec3 lightAmbient;
uniform vec3 lightDiffuse;
uniform vec3 lightSpecular;
uniform vec3 lightAttenuation;

in vec2 texCoord;

void main() {
    // Fetch the G-buffer contents for this fragment
    float depth = texture2D(depthBuffer, texCoord).r;
    vec3 normal = texture2D(normalBuffer, texCoord).rgb;
    vec3 color = texture2D(colorBuffer, texCoord).rgb;

    // Reconstruct the view-space position from the stored depth
    vec3 position;
    position.z = -nearPlane / (farPlane - (depth * (farPlane - nearPlane))) * farPlane;
    position.x = ((gl_FragCoord.x / width) * 2.0) - 1.0;
    position.y = (((gl_FragCoord.y / height) * 2.0) - 1.0) * (height / width);
    position.x *= -position.z;
    position.y *= -position.z;

    normal = normalize(normal);
    vec3 lightVector = lightPosition.xyz - position;
    float dist = length(lightVector);
    lightVector = normalize(lightVector);

    float nDotL = max(dot(normal, lightVector), 0.0);
    vec3 halfVector = normalize(lightVector - position);
    float nDotHV = max(dot(normal, halfVector), 0.0);

    // Blinn-Phong terms (shininess exponent of 1.0)
    vec3 lightColor = lightAmbient;
    vec3 diffuse = lightDiffuse * nDotL;
    vec3 specular = lightSpecular * pow(nDotHV, 1.0) * nDotL;
    lightColor += diffuse + specular;
    float attenuation = clamp(1.0 / (lightAttenuation.x + lightAttenuation.y * dist + lightAttenuation.z * dist * dist), 0.0, 1.0);

    gl_FragColor = vec4(vec3(color * lightColor * attenuation), 1.0);
}

I send the light attributes to the shader as uniforms:

shader->set("lightPosition", (viewMatrix * modelMatrix).inverse().transpose() * vec4(0, 10, 0, 1.0));

viewMatrix is the camera matrix, and modelMatrix is just the identity here.

Why are the point lights translating with the camera instead of staying fixed relative to the models?

Any suggestions are welcome!

For vec3 halfVector = normalize(lightVector - position); to work, the vector position must have unit length (or at least the same length as lightVector); otherwise you will not get the halfway vector but a result that is weighted by the vectors' lengths. – Nobody moving away from SE
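(A minimal sketch of what the comment suggests, using the view-space position already reconstructed in the question's shader: the view vector must be normalized before building the half vector.)

vec3 viewDir = normalize(-position);                 // unit vector from the fragment toward the camera at the origin
vec3 halfVector = normalize(lightVector + viewDir);  // Blinn-Phong half vector built from two unit vectors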

1 Answer


In addition to Nobody's comment that all the vectors you compute with have to be normalized, you have to make sure that they are all in the same space. If you use the view-space position to derive the view vector, the normal vector has to be in view space, too (it has to be transformed by the inverse transpose of the modelview matrix before being written into the G-buffer in the first pass). And the light vector has to be in view space as well. Therefore you have to transform the light position by the view matrix (or the modelview matrix, if the light position is not given in world space), instead of its inverse transpose:

shader->set("lightPosition", viewMatrix * modelMatrix * vec4(0, 10, 0, 1.0));

EDIT: For the directional light the inverse transpose is actually a good idea, if you specify the light direction as the direction to the light (like vec4(0, 1, 0, 0) for a light shining along the -y axis). For a rigid view matrix the inverse transpose rotates a w = 0 direction exactly like the view matrix itself, since the inverse transpose of a pure rotation is that rotation again and the zero w component drops the translation.
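
Using the same C++-style API as in the question, that could look like this sketch (lightDirection is an assumed uniform name):

shader->set("lightDirection", (viewMatrix * modelMatrix).inverse().transpose() * vec4(0, 1, 0, 0));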