2 votes

I'm currently implementing a software renderer that emulates OpenGL as a learning experience based on these lessons. My code for the project can be found here.

I'm having quite some difficulty dealing with vertex normals. I'd like to transform them with the model matrix; I'm aware I should use the inverse transpose of the model matrix when the matrix is not orthogonal. The light direction is specified in world space, so the normals should be transformed into world space and then dotted with the world-space light direction to calculate the light intensity.

This is the problem, though. It works fine here; note the camera is rotated 45 degrees about the up axis, looking at the model.

Front view

If I rotate the model 90 degrees about any axis (take the up axis for now), the light direction flips to point the other way. As you can see here, the light is coming from the back.


If I rotate to 180 degrees, it's fine again.


If I rotate to 45 degrees, the light points at 90 degrees, as shown here. Note the spikes, which show where the light is coming from.


This has puzzled me for hours and I cannot figure out what's wrong. It's as though the rotations are being doubled up on the light. The light's vector isn't being changed, though; look here:

vec4 SmoothShader::Vertex(int iFace, int nthVert)
{
    vec3 vposition = model->vert(iFace, nthVert);
    vec4 projectionSpace = MVP * embed<4>(vposition);

    vec3 light = vec3(0, 0, 1); // world-space light direction

    // Transform the normal into world space with the inverse transpose of the model matrix
    mat4 normTrans = M.invert_transpose();
    vec4 normal = normTrans * embed<4>(model->normal(iFace, nthVert), 0.f);
    vec3 norm = proj<3>(normal);
    intensity[nthVert] = std::max(0.0f, norm.normalise() * light.normalise());

    return projectionSpace;
}

bool SmoothShader::Fragment(vec3 barycentric, vec3 &Colour)
{
    float pixelIntensity = intensity * barycentric;
    Colour = vec3(255, 122, 122) * pixelIntensity;
    return true;
}

The MVP (model, view, projection) and M (model) matrices are calculated like this:

// Model Matrix, converts to world space
mat4 scale = MakeScale(o->scale); 
mat4 translate = MakeTranslate(o->position);
mat4 rotate = MakeRotate(o->rotation);

// Move objects backward from the camera's position
mat4 cameraTranslate = MakeTranslate(vec3(-cameraPosition.x, -cameraPosition.y, -cameraPosition.z));

// Get the camera's rotated basis vectors to rotate everything to camera space.
vec3 Forward;
vec3 Right;
vec3 Up;
GetAxesFromRotation(cameraRotation, Forward, Right, Up);
mat4 cameraRotate = MakeLookAt(Forward, Up);

// Convert from camera space to perspective projection space
mat4 projection = MakePerspective(surf->w, surf->h, 1, 10, cameraFOV);

// Convert from projection space (-1, 1) to viewport space
mat4 viewport = MakeViewport(surf->w, surf->h);

mat4 M = translate * rotate * scale;
mat4 MVP = viewport * projection * cameraRotate * cameraTranslate * M;

Any idea what I'm doing wrong?

2 Answers

1 vote

You should be transforming the normals using the model matrix, not its inverse. Your lighting is behaving as it is because you are rotating the vertex normals in the opposite direction to the vertex positions.

vec4 normal = M * embed<4>(model->normal(iFace, nthVert), 0.f);

To avoid such confusion, I would recommend using the naming scheme advocated by Tom Forsyth, and call M the world_from_object matrix, because it is the transformation from object space to world space.

vec4 light_world = vec4(0.f, 0.f, 1.f, 0.f);
vec4 normal_object = embed<4>(model->normal(iFace, nthVert), 0.f);
vec4 normal_world = world_from_object * normal_object;
float intensity = std::max(0.f, light_world * normal_world);

If you had used this scheme, it would have been clear that you were using the wrong transformation.

mat4 object_from_world = world_from_object.invert_transpose();
vec4 normal_world = object_from_world * normal_object; // wrong!

I personally use the following terminology to describe the different spaces:

  • object space – the local coordinate system of your model
  • view space – the local coordinate system of your camera
  • light space – the local coordinate system of your light
  • world space – the global coordinate system of your scene
  • clip space – the coordinates produced by the projection matrix, which become normalized device coordinates after the perspective divide

As such, I would call the MVP matrix the clip_from_object matrix.

1 vote

You're passing the model matrix to the shader as:

o->shader->M = cameraRotate * cameraTranslate * Model;

So the M your shader inverse-transposes is not the model matrix but the ModelView matrix: the normals end up in view space, while your light direction stays in world space. I'm not certain, but that mismatch is most likely what's producing the strange lighting.