1 vote

I do some basic stuff:

1) calculate my position
2) store it into "gl_Position"
3) then I store my depth into a vec2 based on my position info

gl_Position = vec4( vVertexPos, 1 ) * mMVP;
vDepth = gl_Position.zw;

vDepth is a variable declared above in my vertex shader that I pass to my fragment shader:

out vec2 vDepth;
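
For clarity, the relevant part of my vertex shader looks roughly like this (the in/uniform declarations here are reconstructed from memory, not copied verbatim):

#version 330 core

in vec3 vVertexPos;   // object-space vertex position
uniform mat4 mMVP;    // combined model-view-projection matrix

out vec2 vDepth;      // clip-space z and w, passed on to the fragment shader

void main()
{
    gl_Position = vec4( vVertexPos, 1 ) * mMVP;
    vDepth = gl_Position.zw;
}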

In the fragment shader I store the fragment depth in gl_FragDepth:

gl_FragDepth = vDepth.x / vDepth.y;

My depth values are all near 1. Am I doing something incorrect? Might I be missing a step that would give me linear depth values? When I set up my camera, my near clipping plane is 0.01 and my far clipping plane is 200.0f, and I need the near plane to stay around that value.
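
To be clear about what I'm after: I want a depth that grows linearly with distance from the camera. Something like the following is my best guess at the missing linearization step (just a sketch, assuming a standard perspective projection and the near/far values above):

#version 330 core

in vec2 vDepth;   // clip-space z and w from the vertex shader

const float near = 0.01;
const float far  = 200.0;

void main()
{
    float ndcZ = vDepth.x / vDepth.y;   // non-linear, bunched up near 1.0
    // Invert the projection's depth mapping to recover eye-space distance,
    // then remap [near, far] to [0, 1] so the values form a visible gradient.
    float eyeZ = 2.0 * near * far / (far + near - ndcZ * (far - near));
    gl_FragDepth = (eyeZ - near) / (far - near);
}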

My normal rendering process works just fine; I output models with no problem. I'm wondering what I might be missing to make my depth values look more like a gradient. – Franky Rivera

2 Answers

4 votes

The closer the near plane is to the viewer, the more of your depth buffer precision gets used up close to the viewer. Pushing your near plane out from the eye while leaving the far plane where it is will even out your depth precision throughout your view volume. Your vDepth is also being interpolated perspective-correctly, i.e. according to 1/w.
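
To see how much a tiny near plane dominates, here is a small helper (my notation, not from the question's code; it assumes a standard perspective projection and the default [0, 1] depth range) that computes the window-space depth for a given eye-space distance:

// Window-space depth written to the depth buffer for a point at
// eye-space distance z in front of the camera.
float windowDepth(float z, float near, float far)
{
    float ndcZ = (far + near) / (far - near)
               - (2.0 * far * near) / ((far - near) * z);   // hyperbolic in z
    return 0.5 * ndcZ + 0.5;                                // map [-1, 1] to [0, 1]
}

With near = 0.01 and far = 200.0, a point just 1 unit from the camera already maps to roughly 0.99; with near = 0.5 the same point maps to roughly 0.5, which is why pushing the near plane out spreads the values so dramatically.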

1 vote

You shouldn't set gl_FragDepth in the fragment shader unless you want something other than the default behavior (i.e., gl_FragDepth = gl_FragCoord.z).

See http://www.arcsynthesis.org/gltut/Illumination/Tut13%20Deceit%20in%20Depth.html for why writing gl_FragDepth hurts performance.
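
If you do need a linear depth value for later use (visualization, SSAO, shadows), a common alternative is to leave gl_FragDepth alone and write the linear value to a color attachment instead, so early depth testing keeps working. A rough sketch, assuming hypothetical uNear/uFar uniforms and an fLinearDepth color output that you would set up yourself:

#version 330 core

in vec2 vDepth;           // clip-space z and w from the vertex shader
out float fLinearDepth;   // rendered to a separate float color attachment

uniform float uNear;      // e.g. 0.01
uniform float uFar;       // e.g. 200.0

void main()
{
    float ndcZ = vDepth.x / vDepth.y;
    float eyeZ = 2.0 * uNear * uFar / (uFar + uNear - ndcZ * (uFar - uNear));
    fLinearDepth = (eyeZ - uNear) / (uFar - uNear);   // linear, in [0, 1]
    // gl_FragDepth is not written, so the default gl_FragCoord.z behaviour
    // and early-z optimizations are preserved.
}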