
I am confused about how the OpenGL coordinate system works. I know you start with object coordinates, with everything defined in its own system. Then, by applying a matrix, the coordinates change to world coordinates. By applying another matrix, you have view coordinates. Then, if you're working in 3D, you can apply a perspective matrix. In the end, you are left with a set of coordinates which are likely not in [-1, 1]. How does OpenGL know how to normalize them to [-1, 1]? How does it know what to clip out? In the shader, gl_Position is just given your coordinates; it doesn't know that they have been through several transformations. I know that a view-to-normalized-coordinates matrix involves a translation and a scale, but we never explicitly make a matrix for that in OpenGL. Does OpenGL use its own hidden matrix to translate the coordinates passed to gl_Position into normalized coordinates?
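
For concreteness, here is a minimal vertex-shader sketch of the chain I am describing (the uniform names uModel, uView, and uProjection are just placeholders for this example):

    #version 330 core
    layout(location = 0) in vec3 aPos;   // object-space position

    uniform mat4 uModel;       // object -> world
    uniform mat4 uView;        // world  -> view (eye)
    uniform mat4 uProjection;  // view   -> clip (perspective)

    void main()
    {
        // This is everything I write; nowhere do I build a matrix that
        // maps the result into [-1, 1], yet things end up on screen.
        gl_Position = uProjection * uView * uModel * vec4(aPos, 1.0);
    }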

There are no "hidden" matrices; the only thing there is the perspective divide. – LJᛃ
@LJᛃ What if there is no perspective divide, for example with an orthographic projection? How does OpenGL know how to normalize the coordinates? – foobar5512
@LJᛃ I think I see. So the orthographic projection matrix is more than just an identity matrix? In my Computer Graphics class, we just said the orthographic projection matrix is an identity matrix. So the coordinates are normalized by the orthographic projection matrix? – foobar5512
I feel like you're misunderstanding something here; there is no need to normalize anything. In the end, all transforms, whether they're labeled world, view, or projection, are user-defined and introduced by you; you can render without any of them. The goal is to get things into clip space so that they appear on the screen, and using matrices is just a convenient way to do so. – LJᛃ
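
To make the point from the comments concrete: an orthographic projection matrix is not the identity. It is itself the scale and translate that maps the chosen view volume into the [-1, 1] cube, which is why nothing else needs to normalize the coordinates. Below is a sketch of that mapping, written as a GLSL function purely for illustration (it is the classic glOrtho-style matrix; the function and parameter names are chosen for this sketch):

    // Scales and translates the chosen view volume into the [-1, 1] clip
    // cube; the w component stays 1. GLSL mat4 constructors are
    // column-major: each group below is one column.
    mat4 orthographic(float l, float r, float b, float t, float n, float f)
    {
        return mat4(
            2.0 / (r - l), 0.0,           0.0,            0.0,  // column 0
            0.0,           2.0 / (t - b), 0.0,            0.0,  // column 1
            0.0,           0.0,          -2.0 / (f - n),  0.0,  // column 2
            -(r + l) / (r - l),
            -(t + b) / (t - b),
            -(f + n) / (f - n),
            1.0                                                 // column 3
        );
    }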

1 Answer


The deprecated fixed-function vertex transformations are explained at https://www.opengl.org/wiki/Vertex_Transformation

Shader-based rendering is likely to use the same or very similar math for each transformation step. The missing step between gl_Position and normalized device coordinates is the perspective divide (as LJᛃ noted in the comments), where xyzw coordinates are converted to xyz coordinates. xyzw coordinates are homogeneous coordinates: they represent a 3-dimensional location using 4 components.

https://en.wikipedia.org/wiki/Homogeneous_coordinates
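
As a sketch of what happens to the value written to gl_Position: the divide is performed by the fixed part of the pipeline after the vertex shader runs, not by shader code; the helper name clipToNdc below is made up purely to illustrate the math.

    // Clip space -> normalized device coordinates: divide x, y, z by w.
    // A perspective projection puts the eye-space depth into w; an
    // orthographic projection leaves w at 1, so the division is a no-op
    // and the matrix alone has already produced [-1, 1] coordinates.
    vec3 clipToNdc(vec4 clipPos)
    {
        return clipPos.xyz / clipPos.w;
    }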