
I'm implementing the Phong shading model in OpenGL. I need the normal, the viewer direction, and the light direction for each fragment. A lot of demos pass these vectors from the vertex shader in world coordinates. Maybe that's because there isn't much difference between the normalized world-space vectors and the normalized perspective-space vectors?
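For reference, here is a minimal sketch of that common world-space approach in GLSL; the attribute and uniform names (aPosition, uModel, uLightPosWorld, and so on) are placeholders of mine, not from any particular demo:

    // --- vertex shader: compute world-space vectors per vertex ---
    #version 330 core
    layout(location = 0) in vec3 aPosition;
    layout(location = 1) in vec3 aNormal;

    uniform mat4 uModel;          // model-to-world transform
    uniform mat4 uViewProj;       // world-to-clip transform
    uniform mat3 uNormalMatrix;   // inverse-transpose of uModel's upper 3x3
    uniform vec3 uLightPosWorld;  // light position, world space
    uniform vec3 uEyePosWorld;    // camera position, world space

    out vec3 vNormal;   // world-space normal
    out vec3 vToLight;  // world-space fragment-to-light vector
    out vec3 vToEye;    // world-space fragment-to-eye vector

    void main()
    {
        vec3 worldPos = vec3(uModel * vec4(aPosition, 1.0));
        vNormal  = uNormalMatrix * aNormal;
        vToLight = uLightPosWorld - worldPos;
        vToEye   = uEyePosWorld - worldPos;
        gl_Position = uViewProj * vec4(worldPos, 1.0);
    }

    // --- fragment shader: re-normalize the interpolated vectors, then shade ---
    #version 330 core
    in vec3 vNormal;
    in vec3 vToLight;
    in vec3 vToEye;
    out vec4 fragColor;

    void main()
    {
        vec3 n = normalize(vNormal);
        vec3 l = normalize(vToLight);
        vec3 v = normalize(vToEye);
        vec3 r = reflect(-l, n);                      // Phong reflection vector
        float diff = max(dot(n, l), 0.0);
        float spec = pow(max(dot(r, v), 0.0), 32.0);  // shininess hard-coded for brevity
        fragColor = vec4(vec3(0.1 + diff + 0.5 * spec), 1.0);
    }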

I'm thinking that for the "true" Phong solution, these vectors should be transformed into the perspective coordinate system in the vertex shader, and then the .w divide performed in the fragment shader, because they are not gl_Position. Is this thinking correct?

Edit: This link seems to suggest that OpenGL's varying qualifiers require the original Z coordinate of the fragment to perform correct perspective interpolation. See https://www.opengl.org/wiki/Type_Qualifier_(GLSL)#Interpolation_qualifiers
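For concreteness, these are the three interpolation qualifiers that page describes, sketched here on some made-up vertex shader outputs of mine:

    smooth out vec3 vColor;            // the default: perspective-correct, uses clip-space W
    noperspective out vec3 vScreenPos; // interpolated linearly in window space
    flat out int vMaterialId;          // no interpolation; value taken from the provoking vertex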

So the question I'm wondering about is: can OpenGL derive the Z value from the depth value? Edit: Yes, it can; see "Getting the true z value from the depth buffer".
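As a sketch of that derivation: assuming a standard perspective projection with near/far planes named near and far (names mine), the eye-space Z behind a [0,1] depth-buffer value can be recovered by inverting the projection's depth mapping:

    // Fragment-shader helper (illustrative): recover eye-space Z from a
    // [0,1] depth-buffer value under a standard perspective projection.
    float eyeZFromDepth(float depth, float near, float far)
    {
        float zNdc = 2.0 * depth - 1.0;  // window [0,1] -> NDC [-1,1]
        // Inverts z_ndc = (far+near)/(far-near) + 2*far*near / ((far-near)*z_eye).
        // Result is negative (eye space looks down -Z): -near at depth 0, -far at depth 1.
        return 2.0 * far * near / ((far - near) * zNdc - (far + near));
    }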


1 Answer


First, you cannot forgo the division-by-W step. Why? Because it's hard-wired: it happens as part of OpenGL's fixed functionality. The gl_Position your last vertex processing stage generates will have its W component divided into the other three.

Now, you could try to trick your way around that by sticking 1.0 in gl_Position's W and passing the real projected position as some unrelated output. But the W component is a crucial part of perspective-correct interpolation; by faking your transforms this way, you lose that.

And that's kind of important. So unless you intend to re-interpolate all of your per-vertex outputs in the fragment shader and perform the perspective-correct interpolation yourself, this just isn't going to work.
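To give an idea of what doing that interpolation yourself would mean, here is the textbook formula sketched as a GLSL function; the screen-space barycentric weights would themselves have to come from somewhere, and all names here are mine:

    // Perspective-correct interpolation of one attribute across a triangle --
    // the math the hardware applies to every `smooth` input. bary holds
    // screen-space barycentric weights; w0..w2 are the clip-space W values
    // that the fake-W trick throws away.
    vec3 interpPerspective(vec3 a0, vec3 a1, vec3 a2,
                           float w0, float w1, float w2,
                           vec3 bary)
    {
        vec3  num = bary.x * a0 / w0 + bary.y * a1 / w1 + bary.z * a2 / w2;
        float den = bary.x / w0 + bary.y / w1 + bary.z / w2;
        return num / den;
    }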

Second, when using a perspective projection, post-projection space is related to world space by a non-linear transformation. This means that parallel lines are no longer parallel. It also means that vector directions no longer point at what they used to point at, so your light direction doesn't necessarily point at where your light is.

Oh, and distances are not linear either: equal steps in world space do not map to equal steps after projection. So light attenuation no longer makes sense, since the attenuation factors were designed for a space linearly equivalent to world space, and post-projection space is not.

Here's an image to give you an idea of what I'm talking about:

[Image: World vs. Projection]

What you see on the left is a rendering in world space. What you see on the right is the same scene as on the left, only viewed in post-projection space.

That is not a reasonable space to do lighting in.