
I'm writing a simple OpenGL program that involves rendering to a depth texture offscreen. However, I'm dealing with large depths that exceed what a float's precision can represent, so I need to use unsigned int values for my points. I run into two issues when I try to implement this.

1) Whenever I attempt to draw a VBO that uses unsigned int screen coordinates, the values don't fall within the -1 to 1 range, so none of the points draw to the screen. The only fix I can find is to use an orthographic projection matrix to map them back into screen coordinates. Is this understanding correct, or is there an easier way to do it? If it is correct, how do I properly implement it for what I want?

2) Secondly, when drawing this way, is there any way to preserve the initial values (i.e. not convert them to floats when drawing) so they read back exactly as they were written? This is necessary because my objective is to create a depth buffer of random points with random depths up to 2^32. If the values get converted to floats, precision is lost, so the data read out is no longer the same as what was put in.

I'm not sure what you mean with your first paragraph. The range of a float is far, far greater than that of an unsigned int. Issues with precision of floats only happen at larger distances from the screen, where the distances between individual objects need to be quite large to begin with to even be visible. – Jongware
@RadLexus: He said "represented by a float's precision". So clearly this is a precision issue, not a range issue. The "large depths" part matters because perspective projection biases precision to the near plane. So things farther away get less precision. – Nicol Bolas

1 Answer


This is the wrong solution to the problem. To answer the question itself: gl_Position is a vec4, and therefore the depth that OpenGL sees is a float. There's nothing you can do to change that, short of ignoring the depth buffer entirely and doing "depth tests" yourself in the fragment shader (a sketch of that approach follows).
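
For completeness, that "do the depth test yourself" route is the one that can keep exact 32-bit unsigned integer depths, which part 2 of the question asks for. Below is a hypothetical GLSL fragment-shader sketch, embedded as a C++ string, assuming GL 4.2 / ARB_shader_image_load_store, an r32ui image cleared to 0xFFFFFFFF each frame, and made-up names (uDepthImage, vDepth) for the image and the flat varying that carries the raw depth from the vertex shader:

    // Not the answer's own code: resolve visibility with an atomic min on an
    // r32ui image, so depths stay exact 32-bit unsigned integers throughout.
    const char* kManualDepthFS = R"glsl(
        #version 420 core
        layout(r32ui) coherent uniform uimage2D uDepthImage; // bound via glBindImageTexture
        flat in uint vDepth;   // raw 32-bit depth passed through from the vertex shader
        void main()
        {
            // Keep the smallest depth seen at each pixel; no float conversion occurs.
            imageAtomicMin(uDepthImage, ivec2(gl_FragCoord.xy), vDepth);
        }
    )glsl";

The atomic makes the result order-independent, and reading the image back returns exactly the values that were written.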

The preferred solution to the problem is to use a floating-point depth buffer, such as GL_DEPTH_COMPONENT32F. But that alone is insufficient, due to an unfortunate legacy issue with how OpenGL defines its coordinate transforms. Floats put a lot of precision into the range [0, 1], but that precision is biased toward zero. Because OpenGL's standard transform maps clip-space depth from [-1, 1] into the [0, 1] depth range, that precision gets lost along the way; effectively, the exponent part of the float never gets used. It makes a 32-bit float behave like a 24-bit fixed-point value.
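
A minimal sketch of creating such a depth attachment, assuming a current OpenGL 3.0+ context and hypothetical width/height variables for the offscreen target:

    GLuint depthTex = 0, fbo = 0;
    glGenTextures(1, &depthTex);
    glBindTexture(GL_TEXTURE_2D, depthTex);
    // 32-bit floating-point depth storage instead of the usual 24-bit fixed point
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, width, height, 0,
                 GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_2D, depthTex, 0);
    glDrawBuffer(GL_NONE);   // depth-only pass, no color attachment needed
    glReadBuffer(GL_NONE);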

OpenGL has fixed that problem with ARB_clip_control (core in OpenGL 4.5), which restores the ability to use the full precision of 32-bit floats effectively. You should employ it if at all possible.
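
A minimal sketch of what that looks like, assuming an OpenGL 4.5 context (or the ARB_clip_control extension) and the reversed-Z convention that is usually paired with it:

    // Map clip-space depth straight to [0, 1] instead of the legacy [-1, 1] range,
    // so the float exponent actually gets used.
    glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE);

    // Reversed Z: put 1.0 at the near plane and values approaching 0.0 far away,
    // where float precision is densest; clear and compare accordingly.
    glClearDepth(0.0);
    glDepthFunc(GL_GREATER);

The projection matrix also has to be built for the [0, 1] depth range (e.g. glm::perspectiveRH_ZO in recent GLM versions) rather than the default [-1, 1] one.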