
I'm building a LIDAR simulator in OpenGL. The fragment shader writes the length of the light vector (the distance), normalized by the distance to the far plane (so it falls between 0 and 1), into one of the color channels. In other words, I use red for light intensity and blue for distance; green is set to 0. Alpha is unused, but I keep it at 1.
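Concretely, the output looks something like this (a minimal sketch rather than my exact code; ec_pos and far_dist are my names for the interpolated eye-space position and the far-plane distance):

    // Fragment shader sketch. In eye space the camera sits at the origin,
    // so the distance to the fragment is just length(ec_pos).
    varying vec3 ec_pos;       // eye-space position from the vertex shader
    uniform float far_dist;    // distance to the far plane

    void main() {
        float dist = length(ec_pos) / far_dist;  // normalized to [0, 1]
        float intensity = 1.0;                   // stand-in for the real lighting term
        // red = intensity, green = 0, blue = distance, alpha = 1
        gl_FragColor = vec4(intensity, 0.0, dist, 1.0);
    }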

Here's my test object, which happens to be a rock:

[Image: the rendered test object]

I then write the pixel data to a file and load it into a point cloud visualizer (one point per pixel) with essentially default settings. When I do that, it becomes clear that all of my points lie on discrete planes, each at a different depth:

Point cloud visualization
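The readback itself is unremarkable; roughly this (a sketch assuming a plain RGBA window framebuffer and a raw byte dump, not my actual file format):

    /* Read the color buffer back and dump it to disk, one RGBA byte
     * quadruple per pixel. Width/height match the viewport. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <OpenGL/gl.h>   /* <GL/gl.h> on non-Apple platforms */

    void dump_pixels(const char *path, int width, int height) {
        size_t size = (size_t)width * height * 4;
        unsigned char *buf = malloc(size);
        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buf);

        FILE *f = fopen(path, "wb");
        fwrite(buf, 1, size, f);
        fclose(f);
        free(buf);
    }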

I tried plotting the same data in R. The banding doesn't show up with the default histogram settings because the planes are so closely spaced, but with the breaks set to about 60 I get this:

Density plot of distances (about 60 breaks).

I've tried shrinking the distance between the near and far planes in case it was a precision issue: first I was using 1–1000, and now I'm at 1–500. That may have decreased the spacing between the planes, but I can't tell, because it forces the camera closer to the object.

Is there something I'm missing? Does this have to do with the fact that I disabled anti-aliasing? (Anti-aliasing was causing even worse periodic artifacts, appearing between the camera and the object instead. Disabling line smoothing, polygon smoothing, and multisampling took care of that particular problem.)

Edit

These are the two places the distance calculation is performed:

  • The vertex shader calculates ec_pos, the position of the vertex relative to the camera.
  • The fragment shader calculates light_dir0 from ec_pos and the camera position and uses this to compute a distance. (A sketch of both stages follows this list.)
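Roughly, the two stages look like this (a sketch with my variable names, not the exact code):

    // --- vertex shader (sketch) ---
    varying vec3 ec_pos;                 // eye-space vertex position
    void main() {
        ec_pos = vec3(gl_ModelViewMatrix * gl_Vertex);
        gl_Position = ftransform();
    }

    // --- fragment shader (sketch) ---
    varying vec3 ec_pos;                 // interpolated per fragment
    uniform float far_dist;              // far-plane distance
    void main() {
        // The camera is at the eye-space origin, so the vector from the
        // fragment back to the (camera-mounted) emitter is just -ec_pos.
        vec3 light_dir0 = -ec_pos;
        float dist = length(light_dir0) / far_dist;
        gl_FragColor = vec4(1.0, 0.0, dist, 1.0);
    }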

Is it because I'm calculating ec_pos in the vertex shader? How can I calculate ec_pos in the fragment shader instead?

Have you tried using a floating-point depth buffer (the default is fixed-point)? There's usually not much benefit to doing so outside of a few special applications, but it looks like there may be in this case. Getting one can be a little difficult: they're easy to create with FBOs, but for the default framebuffer you'll have to look into your platform's window-system API. I'm guessing that's either CGL or NSOpenGL here, since this is running on OS X? – Andon M. Coleman
I also have to wonder how you are getting these distance values back. Is it actually the value stored in the depth buffer you're concerned with here, or one of the color components you computed? The color buffer is generally 8-bit fixed-point per component as well. If you try to cram a depth value in there, it is going to exhibit weird artifacts like this, because you've given up at least 8 bits' worth of precision (assuming a 16-bit depth buffer). – Andon M. Coleman
Changing the depth clip planes does not correlate with camera position. Try setting the planes to your object's position +/- its radius. – Kromster
@AndonM.Coleman To be clear, I'm not using the depth buffer to calculate the distance for the purposes of the simulation. I tried that, and it actually produced even stranger discontinuities. The distance is calculated manually, and I've now included links to the shaders. I calculate a distance and store it in the blue channel. – Translunar
@KromStern Could you please clarify your comment? I'm not understanding what you're suggesting. – Translunar

1 Answer


There are several possible issues I can think of.

(1) Depth precision. The far plane has very little effect on depth resolution; the near plane is what matters. See Learning to Love your Z-Buffer.
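For reference, the mapping that article gives: with near plane n, far plane f, and an N-bit depth buffer, the stored value for an eye-space depth z is approximately

    z_buf = (2^N - 1) * (a + b / z),    where a = f / (f - n),  b = f * n / (n - f)

so precision falls off hyperbolically with distance, and moving n outward buys far more resolution than pulling f inward.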

(2) The more probable explanation, given what you've provided, is the conversion/saving of the pixel data. The shader outputs floating-point values, but they are stored in the framebuffer, which typically has only 8 bits per channel. For color, that means your floats get mapped onto an underlying 8-bit fixed-point (integer) representation, which can represent only 256 distinct values.
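A quick sanity check under that assumption: with your far plane at 500 and 8 bits per channel, adjacent representable distances are about

    500 / (2^8 - 1) = 500 / 255 ≈ 1.96 units

apart, which is exactly the kind of evenly spaced banding your point cloud shows.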

If you want to output pixel data as the true floats they are, you should create a 32-bit floating-point RGBA FBO (with an internal format such as GL_RGBA32F). This will store actual floats. Then, when you read your data back from the GPU, you'll get the original shader values.
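A minimal sketch of that setup (assuming GL 3.0+ or ARB_framebuffer_object is available; names and sizes are placeholders, and error checking is omitted):

    /* Render into a GL_RGBA32F color attachment so the shader outputs
     * stay floating point, then read them back unquantized. */
    #include <stdlib.h>
    #include <GL/glew.h>   /* or your platform's GL3 header */

    float *render_to_float_fbo(int width, int height) {
        GLuint fbo, color_tex, depth_rb;

        glGenTextures(1, &color_tex);
        glBindTexture(GL_TEXTURE_2D, color_tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0,
                     GL_RGBA, GL_FLOAT, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

        glGenRenderbuffers(1, &depth_rb);
        glBindRenderbuffer(GL_RENDERBUFFER, depth_rb);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24,
                              width, height);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, color_tex, 0);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                  GL_RENDERBUFFER, depth_rb);

        /* ... render the scene here ... */

        float *pixels = malloc(sizeof(float) * 4 * width * height);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, pixels);
        return pixels;   /* unquantized shader outputs */
    }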

Alternatively, if you don't have an FBO implementation handy, I suppose you could encode a single float across the components of a vec4 with some multiplication.
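The usual packing trick looks something like this (a sketch: it spreads a float in [0, 1) across the four 8-bit channels, to be unpacked after readback; roughly 24 usable bits of precision):

    // Pack a float in [0, 1) into an RGBA8 target and recover it later.
    vec4 pack_float(float v) {
        vec4 enc = fract(v * vec4(1.0, 255.0, 65025.0, 16581375.0));
        enc -= enc.yzww * vec4(1.0/255.0, 1.0/255.0, 1.0/255.0, 0.0);
        return enc;
    }

    float unpack_float(vec4 enc) {
        return dot(enc, vec4(1.0, 1.0/255.0, 1.0/65025.0, 1.0/16581375.0));
    }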