2 votes

I'm having trouble understanding why the model and view matrices are traditionally combined together. I know that the less matrix multiplying you do in the vertex shader the better, but it makes much more sense to me to combine the projection and view matrices.

This is because they are both intrinsically camera properties. It makes sense to me to first transform vertices into world space with the model matrix, perform lighting etc., then use your combined camera matrix to translate to normalised clip space.

I know that I can do it that way if I want in a programmable pipeline, but I want to know why historically people combined the model and view matrices.
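(By matrix associativity, both groupings produce the same clip-space result; the only question is which product you precompute. A minimal sketch with NumPy stand-in matrices, purely to illustrate the point:)

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.random((4, 4))   # stand-in model matrix
V = rng.random((4, 4))   # stand-in view matrix
P = rng.random((4, 4))   # stand-in projection matrix
v = rng.random(4)        # stand-in vertex

# Traditional grouping: precompute the model-view matrix, then project
clip_mv = P @ ((V @ M) @ v)

# The grouping asked about: precompute view-projection, apply after the model matrix
clip_vp = (P @ V) @ (M @ v)

# Associativity of matrix multiplication guarantees these agree
assert np.allclose(clip_mv, clip_vp)
```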

1
Have a look here: stackoverflow.com/questions/10617589/…. I think the answer explains it really nicely. – BDL
I still don't get why you don't do lighting in world space and then go to clip space in one jump (I'd already read that page). – Matt Randell
@ranmat11: That page has a link to an article that explains why in excruciating detail. – Nicol Bolas

1 Answer

3 votes

In graphics programming, the camera doesn't exist. It is always fixed at (0,0,0), looking towards (0,0,-1). The camera as everyone knows it is entirely artificial: it mimics the way we humans are used to observing things, by moving around, pivoting our heads, and so on. To reproduce that, computer graphics introduces the concept of a camera. It is well known that moving the camera to the right is exactly the same as moving every other object in the scene to the left. That equivalence is what lets the view transform be folded into the model transform, combining all the transformations applied to an object into a single matrix: the model-view matrix.
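A minimal NumPy sketch of that equivalence (the matrices and positions here are made up for illustration): the view matrix is just the inverse of the camera's world transform, so "moving the camera right" and "moving everything else left" are literally the same matrix, and it folds into each object's model matrix.

```python
import numpy as np

def translate(tx, ty, tz):
    """4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

# A camera moved 2 units to the right (+x) in world space...
camera_world = translate(2, 0, 0)
# ...is equivalent to moving every object 2 units to the left:
view = np.linalg.inv(camera_world)   # == translate(-2, 0, 0)

model = translate(5, 0, -10)         # some object's model matrix
model_view = view @ model            # one combined model-view matrix

vertex = np.array([0, 0, 0, 1.0])    # object-space vertex at the origin
eye_space = model_view @ vertex      # position relative to the fixed camera
# x = 3, y = 0, z = -10 in eye space: the camera never moved, the world did
```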

The view and projection matrices are kept separate because they perform very different kinds of transformation. The view matrix is very similar to the model matrix: an affine, in-space 3D transformation (rotation and translation). The projection matrix, on the other hand, maps the view volume into clip space and determines the angles from which objects are seen, introducing the perspective effect.
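The structural difference is visible in the matrices themselves. A view matrix is affine, so its bottom row is [0, 0, 0, 1] and it leaves w untouched; a perspective projection writes -z_eye into w, which drives the later perspective divide. A hedged sketch (the OpenGL-style projection formula below is standard, but the camera offset and field of view are arbitrary values chosen for illustration):

```python
import numpy as np

def perspective(fov_y, aspect, near, far):
    """Symmetric OpenGL-style perspective projection matrix."""
    f = 1.0 / np.tan(fov_y / 2.0)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = 2.0 * far * near / (near - far)
    m[3, 2] = -1.0   # copies -z_eye into clip-space w: the perspective trick
    return m

# A view matrix is affine: bottom row [0, 0, 0, 1], so w stays 1
view = np.eye(4)
view[:3, 3] = [0, 0, -5]                     # arbitrary camera offset

proj = perspective(np.radians(60), 16 / 9, 0.1, 100.0)

v_eye = view @ np.array([0, 0, 0, 1.0])      # eye space: w is still 1
v_clip = proj @ v_eye                        # clip space: w == -z_eye == 5
ndc = v_clip[:3] / v_clip[3]                 # perspective divide
```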