
I have a few questions about projection with the near and far clipping planes.
I made a cube with side length 0.5, with a blue front face on the +Z axis and a red back face on the -Z axis.
The model and view matrices are identity matrices, and I use an orthographic projection as the example. When I set the near and far clipping planes to negative values, the rendered faces come out as shown below:

[image: orthographic projection results]


What puzzles me:
1. OpenGL Projection Matrix says that the near plane is the projection plane. Is that right? What is the actual projection plane in OpenGL?
2. When using glOrtho or a similar API:

glOrtho(GLdouble left,  GLdouble right, 
   GLdouble bottom,  GLdouble top,  
   GLdouble nearVal,  GLdouble farVal);   

the near clipping plane is always interpreted as lying at z = -nearVal, and the far clipping plane at z = -farVal. This has nothing to do with the camera's look-at direction, because the camera position and orientation only affect the view matrix. After the view transformation, the camera in OpenGL always points down the -Z axis, which is why nearVal and farVal are interpreted as above. Am I right?
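For example, here is a quick numeric check of that interpretation (a standalone sketch with arbitrary nearVal/farVal values, not my actual code). The Z row of the glOrtho matrix is z_ndc = -2/(f-n) * z_eye - (f+n)/(f-n); plugging in z_eye = -nearVal and z_eye = -farVal should give -1 and +1:

    #include <stdio.h>

    /* The Z row of the matrix glOrtho builds, applied to an eye-space Z. */
    static double ortho_z_ndc(double z_eye, double n, double f)
    {
        return (-2.0 / (f - n)) * z_eye - (f + n) / (f - n);
    }

    int main(void)
    {
        double n = 0.5, f = 2.0;                  /* arbitrary nearVal, farVal */
        printf("%f\n", ortho_z_ndc(-n, n, f));    /* -1.0: the near plane      */
        printf("%f\n", ortho_z_ndc(-f, n, f));    /* +1.0: the far plane       */
        return 0;
    }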

3. In the picture above, when the camera is inside the viewing volume, how does the projection onto the near plane work? Please help explain the result.


1 Answer


What is the actual projection plane in OpenGL?

There isn't one. OpenGL just does math. What projection you use depends on the math you provide, and therefore on the projection matrix you provide.
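In a shader-based pipeline, for instance, you build and upload that matrix yourself. The sketch below (the uniform location parameter is hypothetical, and the matrix is just the one glOrtho documents, written out column-major) hands an orthographic matrix to glUniformMatrix4fv; any other invertible 4x4 matrix would be accepted just the same, which is why there is no built-in projection plane.

    /* Assumes headers/loader exposing GL 2.0+ entry points (e.g. a loader library). */
    #include <GL/gl.h>

    /* Upload a glOrtho-equivalent matrix to an (assumed) projection uniform.
     * Column-major layout, as glUniformMatrix4fv expects with transpose = GL_FALSE. */
    static void set_ortho(GLint u_projection,
                          double l, double r, double b, double t,
                          double n, double f)     /* nearVal, farVal */
    {
        const GLfloat m[16] = {
            (GLfloat)(2.0 / (r - l)), 0.0f, 0.0f, 0.0f,
            0.0f, (GLfloat)(2.0 / (t - b)), 0.0f, 0.0f,
            0.0f, 0.0f, (GLfloat)(-2.0 / (f - n)), 0.0f,
            (GLfloat)(-(r + l) / (r - l)),
            (GLfloat)(-(t + b) / (t - b)),
            (GLfloat)(-(f + n) / (f - n)), 1.0f,
        };
        glUniformMatrix4fv(u_projection, 1, GL_FALSE, m);
    }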

Am I right?

Mostly. How the post-projection Z value maps to "closer" depends on several things. It depends on your projection matrix, but it also depends on your glDepthRange setting, which defaults to mapping the near plane to 0 and the far plane to 1. Then there's the depth test function; the default is GL_LESS.

If you use the default values for the depth range and depth test, then yes, what you said is correct.
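For reference, here is what those defaults look like if you set them explicitly (the values are the documented defaults; the helper name is just illustrative and assumes a current GL context):

    #include <GL/gl.h>

    /* Illustrative helper: spells out the default depth state discussed above. */
    static void use_default_depth_state(void)
    {
        glEnable(GL_DEPTH_TEST);    /* depth testing itself is off by default   */
        glDepthFunc(GL_LESS);       /* default comparison: smaller depth passes */
        glDepthRange(0.0, 1.0);     /* default mapping: near -> 0, far -> 1     */
    }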

In the picture above, when the camera is inside the viewing volume, how does the projection onto the near plane work? Please help explain the result.

Projection always goes in the direction of the near plane. That means objects closer to the near plane will have smaller depth values than those farther (assuming the default glDepthRange). So when you reverse the direction of the two planes, you reverse which objects are "closer".
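Here is a small worked example (my numbers, not yours: a cube of side 0.5, blue face at z_eye = +0.25, red face at z_eye = -0.25, with hypothetical clipping planes at ±1) that computes the default window depth for both plane orderings:

    #include <stdio.h>

    /* Eye-space Z -> window depth, using glOrtho's Z row and the
     * default glDepthRange(0, 1). */
    static double window_depth(double z_eye, double n, double f)
    {
        double z_ndc = (-2.0 / (f - n)) * z_eye - (f + n) / (f - n);
        return (z_ndc + 1.0) * 0.5;
    }

    int main(void)
    {
        /* glOrtho(..., nearVal = -1, farVal = 1): blue is "closer" */
        printf("blue %.3f  red %.3f\n",
               window_depth(+0.25, -1.0, 1.0),    /* 0.375 */
               window_depth(-0.25, -1.0, 1.0));   /* 0.625 */

        /* swapped: glOrtho(..., nearVal = 1, farVal = -1): red is "closer" */
        printf("blue %.3f  red %.3f\n",
               window_depth(+0.25, 1.0, -1.0),    /* 0.625 */
               window_depth(-0.25, 1.0, -1.0));   /* 0.375 */
        return 0;
    }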

Therefore:

  1. Rendering without depth tests doesn't change between the two because which object is "closer" is not used. Without depth testing, the only thing that matters is the order of rendering.

  2. Rendering with the depth test means that which object is closer will be taken into account. So if there is overlap, the farther object will not be visible.

  3. Face culling is based on the apparent winding order of an object's vertices in post-projection window space. Changing the near/far plane order effectively inverts the Z of the window-space coordinates. But the winding order is not based on the window-space Z; it only looks at the 2D projection of the coordinates. Therefore inverting Z does not affect the winding order (see the sketch after this list).

    That is why your blue object won out in the third test.

    There is a difference between rotating a space by 180 degrees and inverting the space. The latter would be the equivalent of a scale by -1. That represents a change in handedness of the space, which you can't do by strict rotation. By negating the Z as you do here, you're inverting the space.
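Here is a sketch of what the culling test actually looks at: the sign of the triangle's signed area in window space (the formula from the GL specification; the coordinates below are made up). Window-space Z never appears in it, so negating Z cannot flip the winding.

    #include <stdio.h>

    /* Signed 2D area of a window-space triangle.
     * Positive -> counter-clockwise, i.e. front-facing with glFrontFace(GL_CCW). */
    static double signed_area_2d(const double p0[2],
                                 const double p1[2],
                                 const double p2[2])
    {
        return 0.5 * ((p1[0] - p0[0]) * (p2[1] - p0[1]) -
                      (p2[0] - p0[0]) * (p1[1] - p0[1]));
    }

    int main(void)
    {
        /* hypothetical window-space x,y of one triangle of the blue face */
        const double a[2] = { 100.0, 100.0 };
        const double b[2] = { 200.0, 100.0 };
        const double c[2] = { 200.0, 200.0 };

        /* the result is the same no matter what Z each vertex ends up with */
        printf("signed area = %f\n", signed_area_2d(a, b, c));
        return 0;
    }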

Your "camera origin" is more or less irrelevant for orthographic projections. The fact that a triangle is "behind" the camera is irrelevant to projection math. As long as it is within the projection region, it will be visible.