1 vote

In the vertex shader we usually create the TBN matrix:

vec3 n = normalize(gl_NormalMatrix * gl_Normal);        // normal in eye space
vec3 t = normalize(gl_NormalMatrix * Tangent.xyz);      // tangent in eye space
vec3 b = normalize(gl_NormalMatrix * Bitangent.xyz);    // bitangent in eye space
mat3 tbn = mat3(t, b, n);

This matrix transforms vectors from tangent space into eye/camera space.

Now for normal mapping (done in forward rendering) we have two options:

  1. Invert the TBN matrix, transform light_vector and view_direction by it, and send those vectors to the fragment shader. After that, those vectors are in tangent space (a sketch of this follows below).
    • That way, in the fragment shader we only need to read the normal from a normal map. Since such normals are in tangent space (by "definition"), they match our transformed light_vector and view_direction.
    • This way we do the lighting calculations in tangent space.
  2. Pass the TBN matrix to the fragment shader and transform each normal read from the normal map by it. That way we transform each such normal into view space (a second sketch follows further below).
    • This way we do the lighting calculations in eye/camera space.
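
A minimal vertex-shader sketch of option 1, in the same old-style GLSL as the snippet above; u_LightPosEye (the light position in eye space) and the v_* varying names are assumptions, not names from the question:

attribute vec4 Tangent;
attribute vec4 Bitangent;
uniform vec3 u_LightPosEye;    // assumed: light position in eye space
varying vec3 v_LightDirTS;     // light direction in tangent space
varying vec3 v_ViewDirTS;      // view direction in tangent space

void main()
{
    vec3 n = normalize(gl_NormalMatrix * gl_Normal);
    vec3 t = normalize(gl_NormalMatrix * Tangent.xyz);
    vec3 b = normalize(gl_NormalMatrix * Bitangent.xyz);

    vec3 posEye      = vec3(gl_ModelViewMatrix * gl_Vertex);
    vec3 lightDirEye = u_LightPosEye - posEye;
    vec3 viewDirEye  = -posEye;    // the camera sits at the origin in eye space

    // For an orthonormal TBN the inverse is just the transpose, so a dot
    // product with each basis vector moves a vector into tangent space.
    v_LightDirTS = vec3(dot(lightDirEye, t), dot(lightDirEye, b), dot(lightDirEye, n));
    v_ViewDirTS  = vec3(dot(viewDirEye,  t), dot(viewDirEye,  b), dot(viewDirEye,  n));

    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position    = ftransform();
}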

Option 1 seems faster: most transformations happen in the vertex shader, and the fragment shader only does one read from the normal map.

Option 2 requires transforming each normal read from the normal map by the TBN matrix, but it seems a bit simpler.
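
For comparison, a minimal fragment-shader sketch of option 2; v_TBN (the matrix passed from the vertex shader) and u_NormalMap are assumed names:

varying mat3 v_TBN;             // tangent space -> eye space
uniform sampler2D u_NormalMap;  // assumed: tangent-space normal map

void main()
{
    // Unpack the stored normal from [0,1] to [-1,1].
    vec3 nTS  = texture2D(u_NormalMap, gl_TexCoord[0].st).rgb * 2.0 - 1.0;
    // One matrix multiply per fragment moves it into eye space.
    vec3 nEye = normalize(v_TBN * nTS);
    // ... lighting here with nEye, light_vector and view_direction all in eye space ...
    gl_FragColor = vec4(nEye * 0.5 + 0.5, 1.0);   // placeholder output
}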

Questions:

Which option is better?

Is there any performance loss? (Maybe the texture read will "hide" the cost of the matrix transformation.)

Which option is more often used?

This question cannot be answered as asked. The answer will depend on a number of factors, such as the number of per-vertex parameters being interpolated, the amount of per-vertex processing going on relative to the per-fragment processing, and so forth. – Nicol Bolas
OK, especially the performance cannot be measured easily... but what about the visual difference? As I see it, those solutions should be equal? – fen

1 Answer

3 votes

I'll tell you this much right now: depending on your application, option 1 may not even be possible.

In a deferred shading engine, you have to compute the light vector in your fragment shader, which rules out option 1. You cannot keep the TBN matrix around when it comes time to do lighting in deferred shading either, so you transform your normals ahead of time, when you build your normal G-Buffer, into world space (or view space, though that is not favored very often anymore). The TBN matrix itself can still be computed in the vertex shader and passed to the fragment shader as a flat mat3; there you sample the normal map, rotate the sampled normal by that basis, and write the result in world space.
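
As a rough sketch of that G-Buffer write (v_TBN here goes from tangent space to *world* space; the sampler name and the [0,1] packing are assumptions, one common choice among several):

varying mat3 v_TBN;             // tangent space -> world space
uniform sampler2D u_NormalMap;

void main()
{
    vec3 nTS    = texture2D(u_NormalMap, gl_TexCoord[0].st).rgb * 2.0 - 1.0;
    vec3 nWorld = normalize(v_TBN * nTS);
    // Pack back into [0,1] and store; the deferred lighting passes read this later.
    gl_FragData[0] = vec4(nWorld * 0.5 + 0.5, 0.0);
}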

I can tell you from experience that the big-name graphics engines (e.g. Unreal Engine 4, CryEngine 3, etc.) actually do lighting in world space now. They also employ deferred shading, so for these engines neither of the options you proposed above is used at all :)


By the way, you are wasting space in your vertex buffer if you are actually storing the normal, binormal, and tangent vectors. They are basis vectors of an orthonormal vector space, so they are all at right angles. Because of this, you can compute the third vector from any two by taking the cross product. Furthermore, since they are at right angles and should already be normalized, you do not need to normalize the result of the cross product (recall that |a x b| = |a| * |b| * sin(a,b)). Thus, this should suffice in your vertex shader:


// Normal and tangent should be orthogonal, so the cross product
// is also normalized - no need to re-normalize.
vec3 binormal = cross(normal, tangent);
TBN = mat3(tangent, binormal, normal);

This will get you on your way to world-space normals (which tend to be more efficient for a lot of popular post-processing effects these days). You will probably have to re-normalize the matrix if you intend to alter it to produce view-space normals.