
I am working on adding basic point lighting to my LWJGL-based game engine. I am using the OpenGL fixed-function light state for the light's position and color, but doing the actual lighting computations in shaders. My problem is somewhere in the transformation from world-space to eye-space coordinates. My goal is for the light to sit at a fixed position relative to the world. I know that when you set the position of a light, it is transformed by the OpenGL matrix stack the same way geometry is. However, when I move the camera, the lighting changes. The function clearRenderer is called at the beginning of the render phase each frame (the GL_PROJECTION matrix is already set up):

public static void clearRenderer(CameraViewpoint viewpoint) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    setupCamera3D(viewpoint);
    Light.drawLights();
}

public static void setupCamera3D(CameraViewpoint viewpoint) {
    Angles a = viewpoint.cameraAngle();
    Vector3 pos = viewpoint.cameraPosition();

    // Apply the inverse of the camera's orientation, then the inverse of
    // its position, so world-space geometry ends up in eye space.
    rotateZ(-a.roll);
    rotateX(-a.pitch);
    rotateY(-a.yaw);

    translate(pos.negate());
}

Here is the drawLights function in the Light class which is called in the code above:

public static void drawLights() {
    for (int i = 0; i < numLights; i++) {
        lights[i].drawLight();
    }
}

public void drawLight() {
    FloatBuffer buff = BufferUtils.createFloatBuffer(4);
    pos.store(buff); // pos is the position of the light in world-space
    buff.put(1.0f);
    buff.flip();
    glLight(GL_LIGHT0 + index, GL_POSITION, buff);
}

The vertex shader:

uniform float time;

varying vec3 normal;

void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    normal = (gl_NormalMatrix * gl_Normal);
    gl_TexCoord[0] = gl_MultiTexCoord0;
}

And the fragment shader:

uniform float time;

uniform vec3 m_ambient;
uniform vec3 m_diffuse;
uniform vec3 m_specular;

uniform sampler2D tex;

varying vec3 normal;

void main() {

    vec4 lightPos = gl_LightSource[0].position;

    vec4 color = texture2D(tex, gl_TexCoord[0].st);
    vec3 light = m_ambient;
    float f = dot(normalize((lightPos - gl_FragCoord).xyz), normalize(normal));

    f = clamp(f, 0.0, 1.0);

    light += m_diffuse * f;
    color.a = 1.0;
    gl_FragColor = color * vec4(light, 1.0);
}

So clearly, one of the following three things is not getting transformed properly: vertices, normals, or the light position. I know the vertices are transformed correctly, since geometry appears where it should. Is there something I'm doing wrong in the fragment shader when I subtract the fragment coordinates from the light's position? Or is the light position not properly transformed into eye space?


1 Answer


Ok, fixed it. I was doing a few things wrong. Mainly, gl_FragCoord refers to the fragment's window-space (pixel) position, which is not the same as eye space. So I added a varying to carry the eye-space position of the vertex into the fragment shader. Also, light positions and normals are transformed by the GL_MODELVIEW matrix only, not GL_PROJECTION, so I had to make sure not to apply the projection matrix to that position.
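The shader changes described above can be sketched roughly like this (the varying name eyePos is my own; everything else matches the shaders in the question):

```glsl
// Vertex shader
varying vec3 normal;
varying vec3 eyePos; // eye-space vertex position, passed to the fragment shader

void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    // Modelview only -- no projection -- so this stays in eye space,
    // the same space as gl_LightSource[0].position.
    eyePos = (gl_ModelViewMatrix * gl_Vertex).xyz;
    normal = gl_NormalMatrix * gl_Normal;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}

// Fragment shader (the relevant part)
varying vec3 normal;
varying vec3 eyePos;

// ...inside main():
// vec3 lightDir = normalize(gl_LightSource[0].position.xyz - eyePos);
// float f = clamp(dot(lightDir, normalize(normal)), 0.0, 1.0);
```

Both the light position and eyePos are now in eye space, so the subtraction gives a meaningful direction vector.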

Also, to set the position of the light, it's easier to translate the GL_MODELVIEW matrix by the light's world position and pass a buffer containing (0, 0, 0, 1) as the position.
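A minimal sketch of that approach, assuming a drawLight-style method as in the question (the GL calls are shown as comments since they need a live context; only the buffer construction runs standalone, using plain java.nio instead of LWJGL's BufferUtils):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class LightPos {
    // Builds the constant (0, 0, 0, 1) light position: the origin,
    // with w = 1 marking it as a positional (not directional) light.
    static FloatBuffer originLightPos() {
        FloatBuffer buff = ByteBuffer.allocateDirect(4 * 4)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        buff.put(0f).put(0f).put(0f).put(1f);
        buff.flip();
        return buff;
    }

    public static void main(String[] args) {
        FloatBuffer buff = originLightPos();
        // In the engine's drawLight (sketch, requires a GL context):
        // glPushMatrix();
        // glTranslatef(pos.x, pos.y, pos.z); // pos = world-space light position
        // glLight(GL_LIGHT0 + index, GL_POSITION, buff);
        // glPopMatrix();
        System.out.println(buff.get(0) + " " + buff.get(1) + " "
                + buff.get(2) + " " + buff.get(3));
    }
}
```

Because the modelview matrix already holds the camera transform at that point, the translation lands the light at the right eye-space position without building a per-light buffer from world coordinates.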