I'm trying to figure out what happens internally in a vertex/fragment shader pair. More specifically, I'm trying to compare depth values on the CPU. One of these depth values is calculated by a shader and corresponds to the gl_FragCoord.z value. What I do know is that the depth output is in the range [0, 1] and not linear. So does anyone know what happens to any given depth value between feeding the position to the vertex shader and reading the depth out in the fragment shader? How does OpenGL convert depth into the range [0, 1]?

Thanks a lot in advance!

2 Answers


After applying the matrix transformations, a vertex v is inside the canonical view volume if it satisfies:

-v.w <= v.x <= v.w,  -v.w <= v.y <= v.w,  -v.w <= v.z <= v.w

After the perspective division by w:

-1 <= x' <= 1,  -1 <= y' <= 1,  -1 <= z' <= 1   (where x' = v.x / v.w, etc.)

The depth value in the normalized [0, 1] range is then given by (assuming the default glDepthRange(0, 1)):

depth = 0.5 * (z' + 1.0)
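
To make this concrete, here is a minimal C++ sketch of those two steps. The clip-space z and w are made-up example values, roughly what a perspective projection with near = 0.1 and far = 100 produces for a point 10 units in front of the camera:

#include <cstdio>

int main() {
    // Hypothetical clip-space z and w for one vertex (see lead-in above).
    float zClip = 9.81982f;
    float wClip = 10.0f;

    // Perspective division: clip space -> normalized device coordinates.
    float zNdc = zClip / wClip;          // in [-1, 1]

    // Depth-range mapping to [0, 1] (the default glDepthRange).
    float depth = 0.5f * (zNdc + 1.0f);  // this is what gl_FragCoord.z holds

    std::printf("depth = %f\n", depth);  // ~0.990991
    return 0;
}

Note how a point only 10 units into a 0.1..100 frustum already maps close to 1.0; that is exactly the non-linearity the question mentions.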

In a simple OpenGL renderer, the depth values are generated by the matrix transformations in the vertex shader. My basic vertex shader does the following:

// Combine the three application-supplied matrices into one transform.
mat4 mvpMatrix = ProjectionMatrix * ViewMatrix * ModelMatrix;
vec4 pos = vec4(ObjectPosition.x, ObjectPosition.y, ObjectPosition.z, 1.0);

// gl_Position is the vertex position in clip space; the perspective
// divide and the depth-range mapping happen after the vertex shader.
gl_Position = mvpMatrix * pos;

The three matrices that serve as input are controlled by my application; I have full control over them and could work out the process manually if I wanted to.

Summarized: figure out which matrices you are sending to the vertex shader and do the math (see the sketch below). To be precise, the vertex shader only outputs a clip-space position; the perspective divide and the mapping of z' into [0, 1] happen in the fixed-function stages afterwards (the latter via glDepthRange, which defaults to [0, 1]). The most important matrix in your case is probably ProjectionMatrix, since it encodes the near and far planes and therefore determines how depth values are distributed.
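
As an illustration, here is a minimal C++ sketch of that math, using GLM for the matrix types. The three matrices are hypothetical stand-ins for the shader's uniforms, and linearizeDepth assumes a standard perspective projection together with the default glDepthRange(0, 1):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

// Invert the non-linear depth mapping of a standard perspective projection
// (default depth range [0, 1]): recover eye-space distance from a depth value.
static float linearizeDepth(float depth, float zNear, float zFar) {
    float zNdc = 2.0f * depth - 1.0f;  // [0, 1] -> NDC [-1, 1]
    return 2.0f * zNear * zFar / (zFar + zNear - zNdc * (zFar - zNear));
}

int main() {
    // Hypothetical stand-ins for the three uniforms the shader receives.
    glm::mat4 ModelMatrix      = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -10.0f));
    glm::mat4 ViewMatrix       = glm::mat4(1.0f); // camera at the origin, looking down -Z
    glm::mat4 ProjectionMatrix = glm::perspective(glm::radians(60.0f), 4.0f / 3.0f, 0.1f, 100.0f);

    // The same math the vertex shader performs.
    glm::vec4 clip = ProjectionMatrix * ViewMatrix * ModelMatrix * glm::vec4(0.0f, 0.0f, 0.0f, 1.0f);

    // The fixed-function steps: perspective divide, then depth-range mapping.
    float depth = 0.5f * (clip.z / clip.w + 1.0f);

    std::printf("depth = %f, eye distance = %f\n",
                depth, linearizeDepth(depth, 0.1f, 100.0f)); // ~0.990991, ~10.0
    return 0;
}

Since the stored depth is non-linear, comparing values on the CPU is often easiest after converting them back to linear eye-space distances, as linearizeDepth does here.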