2 votes

After a few days of getting my GLSL vertex shader to display the vertices correctly, I've now moved on to lighting! My understanding of OpenGL lighting/normals isn't great by any stretch of the imagination, so bear with me. I'm unsure what transformations I need to apply to my normals to get them to display correctly. Here is the application code that sets up my lights:

    final float diffuseIntensity = 0.9f;
    final float ambientIntensity = 0.5f;

    // Light 0: green diffuse, grey ambient.
    // (w = 0 in the position makes this a directional light in fixed-function OpenGL.)
    final float position[] = { 0f, 0f, 25000f, 0f };
    gl.glLightfv(GL.GL_LIGHT0, GL.GL_POSITION, position, 0);
    final float diffuse[] = { 0f, diffuseIntensity, 0f, 1f };
    gl.glLightfv(GL.GL_LIGHT0, GL.GL_DIFFUSE, diffuse, 0);
    final float ambient[] = { ambientIntensity, ambientIntensity, ambientIntensity, 1f };
    gl.glLightfv(GL.GL_LIGHT0, GL.GL_AMBIENT, ambient, 0);

Pretty standard stuff so far. Now because of the requirements of the application, here is the (somewhat odd) vertex shader:

    // P is the camera matrix; the model_X_matrices hold the relative
    // translation/rotation/scale of the model currently being rendered.
    uniform mat4 P;
    uniform mat4 modelTranslationMatrix;
    uniform mat4 modelRotationMatrix;
    uniform mat4 modelScaleMatrix;

    void main()
    {
        vec4 pos = gl_ProjectionMatrix * P * modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix * gl_Vertex;

        gl_Position = pos;

        gl_TexCoord[0] = gl_MultiTexCoord0;

        gl_FrontColor = gl_Color;
    }

It's my understanding that I need to transform the gl_Normal into world coordinates. For my shader, I believe this would be:

    vec4 normal = modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix * vec4(gl_Normal, 1.0);

And then I would need to get the position of the light (which was already declared in the application code in world space). I think I would do this by:

    vec3 light_position = gl_LightSource[0].position.xyz;

and then find the diffuse value of the light by taking the dot product of the normal and the normalised light direction.

Furthermore, I think that in the fragment shader I just need to multiply the color by this diffuse value and it should all work. I'm just really not sure how to transform the normals correctly. Is my assumption correct, or am I totally off the ball?
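
For concreteness, here's the whole thing as I currently picture it (untested, and using the normal transform I'm unsure about):

    // Vertex shader sketch, same uniforms as above.
    uniform mat4 P;
    uniform mat4 modelTranslationMatrix;
    uniform mat4 modelRotationMatrix;
    uniform mat4 modelScaleMatrix;

    varying float diffuseTerm;

    void main()
    {
        mat4 model = modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix;
        gl_Position = gl_ProjectionMatrix * P * model * gl_Vertex;

        // The normal transform I'm asking about (see the EDIT below).
        vec3 normal = normalize((model * vec4(gl_Normal, 1.0)).xyz);
        vec3 lightDir = normalize(gl_LightSource[0].position.xyz);
        diffuseTerm = max(dot(normal, lightDir), 0.0); // Lambertian diffuse factor

        gl_FrontColor = gl_Color;
        gl_TexCoord[0] = gl_MultiTexCoord0;
    }

and the fragment shader:

    // Fragment shader sketch: modulate the interpolated color by the diffuse term.
    varying float diffuseTerm;

    void main()
    {
        gl_FragColor = gl_Color * diffuseTerm;
    }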

EDIT: After reading that the normal matrix (gl_NormalMatrix) is the inverse transpose of the upper 3x3 of the gl_ModelViewMatrix, I'm guessing that a correct way to calculate the normal in world space is to multiply gl_Normal by the inverse transpose of modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix? Would I still need to multiply this by the P matrix, or is that irrelevant for normal calculations?
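
Something like this is what I now have in mind (assuming a GLSL version where inverse() is available, i.e. 1.40+; otherwise I'd precompute the matrix on the CPU and upload it as a uniform):

    mat4 model = modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix;

    // Inverse transpose of the upper-left 3x3: rotation is preserved,
    // non-uniform scale is corrected, and translation never enters into it.
    mat3 normalMatrix = transpose(inverse(mat3(model)));
    vec3 worldNormal = normalize(normalMatrix * gl_Normal);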


2 Answers

1 vote

You should premultiply modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix on the CPU and pass the result as a single uniform.

Normals are multiplied by the inverse transpose of the modelview matrix; the details are explained in the excellent article here: http://www.lighthouse3d.com/tutorials/glsl-tutorial/the-normal-matrix/
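
A minimal sketch of both points together (the uniform names here are mine, not from the question):

    uniform mat4 P;            // camera matrix, as in the question
    uniform mat4 modelMatrix;  // translation * rotation * scale, premultiplied on the CPU
    uniform mat3 normalMatrix; // transpose(inverse(mat3(modelMatrix))), also precomputed

    varying vec3 worldNormal;

    void main()
    {
        gl_Position = gl_ProjectionMatrix * P * modelMatrix * gl_Vertex;
        // The normal matrix keeps rotation and corrects for non-uniform scale.
        worldNormal = normalize(normalMatrix * gl_Normal);
        gl_FrontColor = gl_Color;
    }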

0 votes

Although I realise this is a year-old question, since I was just struggling with this myself I thought I'd share my answer.

A vector in OpenGL is represented exactly like a point (i.e. one coordinate per axis). That means its direction is defined by the translation from the origin to those coordinates. When you translate a conceptual mathematical vector, you don't change its direction. When you translate an OpenGL vector without translating its origin, you do.

And that's what's wrong with running the normal vectors through the modelview matrix (or whatever your custom matrix stack is): generally speaking it will contain rotation and translation (and scaling in the question, but that's neither here nor there). By applying the translation you change the direction of the normals. Long story short: the further away the vertices, the closer the normals come to being parallel to the camera-vertex vector.

Ergo, rather than

    vec4 normal = modelTransformMatrix * vec4(gl_Normal, 1.0);

you actually want

    vec3 normal = mat3(modelTransformMatrix) * gl_Normal;

Hence excluding the translation terms, but retaining any rotation, scale and shear.
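
Equivalently, you can stay in vec4 and zero the w component; the translation column of the matrix is then multiplied by zero and drops out:

    // w = 0.0 means the matrix's translation column contributes nothing.
    vec3 normal = (modelTransformMatrix * vec4(gl_Normal, 0.0)).xyz;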

As for using the camera matrix, that depends on what you want to do. The important thing is that all values in an equation are in the same coordinate space. That said, multiplying by the camera projection will likely cause problems, since it's (probably) set up to project from camera-relative 3D coordinates into screen coordinates plus depth. Generally you'd calculate lighting in world space: multiply by the model transform, but not by the camera projection.
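
As a sketch of that last point (the uniform names are illustrative, and I'm assuming a point light whose position is already given in world space):

    uniform mat4 P;                    // camera matrix, as in the question
    uniform mat4 modelTransformMatrix; // model -> world
    uniform vec3 lightPositionWorld;   // light position in world space

    varying float diffuseTerm;

    void main()
    {
        // Lighting in world space: apply the model transform, not the camera.
        vec4 worldPos = modelTransformMatrix * gl_Vertex;
        vec3 normal = normalize(mat3(modelTransformMatrix) * gl_Normal);

        vec3 lightDir = normalize(lightPositionWorld - worldPos.xyz);
        diffuseTerm = max(dot(normal, lightDir), 0.0);

        // Only the final position goes through the camera and projection.
        gl_Position = gl_ProjectionMatrix * P * worldPos;
        gl_FrontColor = gl_Color;
    }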