3 votes

I've been trying to write a simple diffuse lighting shader for Android using OpenGL ES 2.0, working from the OpenGL 4.0 Shading Language Cookbook. The book doesn't say much about the normal matrix, and I'm fairly sure that's where my problem is, because the same "model" works perfectly in WebGL, where I can use the nice glMatrix library that I can't find an equivalent of for Java.

I'm not sure how to get the normal matrix from the modelview matrix. I've read that it's just the transpose of the inverse of the upper-left 3x3 part of the modelview matrix, but unfortunately Android's Matrix class only works with 4x4 matrices (right?), so I've been splitting the matrix up in the shader, which is probably where I go wrong.

So what I do is simply this:

    float[] nMatrix = new float[4 * 4];
    Matrix.invertM(nMatrix, 0, mvMatrix, 0);
    Matrix.transposeM(nMatrix, 0, nMatrix, 0);

    glUniformMatrix4fv(shader.getUniformPointer("nMatrix"), 1, false, nMatrix, 0);

and then at my vertex shader I do this:

    tNorm = normalize(mat3(nMatrix) * vNormal).xyz;
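I suspect what I should really be doing is the splitting on the Java side instead: pull the upper-left 3x3 out of the inverted/transposed matrix and upload it as a mat3. This is just an untested sketch of that idea on my part (glUniformMatrix3fv does exist in GLES20, and the shader would then declare `uniform mat3 nMatrix;` and drop the cast):

    // Untested sketch: copy the upper-left 3x3 of the column-major 4x4 nMatrix.
    float[] n3 = new float[9];
    for (int col = 0; col < 3; col++) {
        // column-major layout: column col of the 4x4 starts at index col * 4
        System.arraycopy(nMatrix, col * 4, n3, col * 3, 3);
    }
    glUniformMatrix3fv(shader.getUniformPointer("nMatrix"), 1, false, n3, 0);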

The rest of the code is basically from the book, and the results are below:

[Screenshot: the rendered cube, with some of its faces completely dark]

As you can see, some sides of the cube are completely dark. I'm pretty sure all the normals are correct, even though I don't know of any GL debugger for Android; if you know one, feel free to tell me about it.

So the question is, how can I properly get the normal matrix from my modelview matrix?


2 Answers

4 votes

I don't know where you could find a matrix library in Java to help you with this.

But as long as your modelView matrix does not contain any non-uniform scale, you can safely use the modelView matrix itself in place of the normal matrix.

This could help you get started and make sure that your problem is not hidden elsewhere.
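For example, reusing the uniform name and helper from the question (a sketch of the sanity check, not a final fix):

    // Sanity-check sketch (assumes mvMatrix contains no non-uniform scale):
    // feed the modelview matrix straight into the normal-matrix uniform;
    // the mat3(nMatrix) cast in the vertex shader stays exactly as it is.
    glUniformMatrix4fv(shader.getUniformPointer("nMatrix"), 1, false, mvMatrix, 0);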

0 votes

This might be useful: a port of some useful mat3 functions from the glm library.

    package com.CosmicCreations;

    public class Mat3x3 {

        // Determinant of a 3x3 matrix stored in column-major order.
        public static float determinant(float[] m) {
            return    m[0] * (m[4] * m[8] - m[7] * m[5])
                    - m[3] * (m[1] * m[8] - m[7] * m[2])
                    + m[6] * (m[1] * m[5] - m[4] * m[2]);
        }

        // Copies the upper-left 3x3 block of a column-major 4x4 matrix into m.
        public static void Mat3(float[] m4, float[] m) {
            m[0] = m4[0]; m[1] = m4[1]; m[2] = m4[2];
            m[3] = m4[4]; m[4] = m4[5]; m[5] = m4[6];
            m[6] = m4[8]; m[7] = m4[9]; m[8] = m4[10];
        }

        /*
            Reference (glm):
            Inverse[0][0] = + (m[1][1] * m[2][2] - m[2][1] * m[1][2]);
            Inverse[1][0] = - (m[1][0] * m[2][2] - m[2][0] * m[1][2]);
            Inverse[2][0] = + (m[1][0] * m[2][1] - m[2][0] * m[1][1]);
            Inverse[0][1] = - (m[0][1] * m[2][2] - m[2][1] * m[0][2]);
            Inverse[1][1] = + (m[0][0] * m[2][2] - m[2][0] * m[0][2]);
            Inverse[2][1] = - (m[0][0] * m[2][1] - m[2][0] * m[0][1]);
            Inverse[0][2] = + (m[0][1] * m[1][2] - m[1][1] * m[0][2]);
            Inverse[1][2] = - (m[0][0] * m[1][2] - m[1][0] * m[0][2]);
            Inverse[2][2] = + (m[0][0] * m[1][1] - m[1][0] * m[0][1]);
            Inverse /= Determinant;
        */
        // Inverts the 3x3 matrix m (cofactor expansion) and writes the result into
        // Inverse starting at offset; m and Inverse may share the same array as
        // long as the input and output regions do not overlap.
        public static void inverse(float[] m, float[] Inverse, int offset) {
            float determinant = determinant(m);
            Inverse[offset + 0] = +(m[4] * m[8] - m[7] * m[5]) / determinant;
            Inverse[offset + 3] = -(m[3] * m[8] - m[6] * m[5]) / determinant;
            Inverse[offset + 6] = +(m[3] * m[7] - m[6] * m[4]) / determinant;
            Inverse[offset + 1] = -(m[1] * m[8] - m[7] * m[2]) / determinant;
            Inverse[offset + 4] = +(m[0] * m[8] - m[6] * m[2]) / determinant;
            Inverse[offset + 7] = -(m[0] * m[7] - m[6] * m[1]) / determinant;
            Inverse[offset + 2] = +(m[1] * m[5] - m[4] * m[2]) / determinant;
            Inverse[offset + 5] = -(m[0] * m[5] - m[3] * m[2]) / determinant;
            Inverse[offset + 8] = +(m[0] * m[4] - m[3] * m[1]) / determinant;
        }

        // Transposes the 3x3 matrix stored in m at offset into result[0..8].
        public static void transpose(float[] m, int offset, float[] result) {
            result[0] = m[offset + 0];
            result[1] = m[offset + 3];
            result[2] = m[offset + 6];

            result[3] = m[offset + 1];
            result[4] = m[offset + 4];
            result[5] = m[offset + 7];

            result[6] = m[offset + 2];
            result[7] = m[offset + 5];
            result[8] = m[offset + 8];
        }
    }

It should be used like this:

    // Invert + transpose of the upper-left 3x3 of the modelview matrix
    float[] temp = new float[18];
    Mat3x3.Mat3(mMVMatrix, temp);            // copy the 3x3 block into temp[0..8]
    Mat3x3.inverse(temp, temp, 9);           // invert it into temp[9..17]
    Mat3x3.transpose(temp, 9, normalMatrix); // transpose into normalMatrix[0..8]
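If it helps, the final step (not shown above; `program` and the handle name are placeholders for your own shader setup) would be to upload the resulting float[9] to a mat3 uniform, with the vertex shader declaring `uniform mat3 nMatrix;`:

    // Hypothetical upload step: normalMatrix is the float[9] produced above.
    int nMatrixHandle = GLES20.glGetUniformLocation(program, "nMatrix");
    GLES20.glUniformMatrix3fv(nMatrixHandle, 1, false, normalMatrix, 0);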