10 votes

I've looked around and have never seen it nailed down exactly what each matrix does and which operations form it (that is, the actual Eigen function calls). That's what I'm looking for, or at least a description of the process and a couple of examples with Eigen functions to see generally how to do it. Anyway, here are some details in case they are useful:

I'm setting up a top-down perspective game (the camera is fixed pointing downward but can rotate and move along the XY plane). Since I'll have some 3D elements (along with some things that are strictly 2D), I think a perspective projection would work well. But I do wonder what calls would be necessary to form an orthographic projection...

I sort of understand the view matrix: translate the camera coordinates to the origin, rotate by the camera rotation, translate them back to where they were, then scale for zoom? But I'm not sure exactly which functions and objects would be involved.

And for storing the rotation of any given object, a quaternion appears to be the best choice. So would that determine the model matrix? If I manage to get my rotation simplified to the 2D case of a single angle, would quaternions then be wasteful?

And do these matrices all need to be regenerated from identity each frame? Or can they be altered somehow to fit the new data?

I would really prefer to use Eigen for this instead of a hand-holding library, but I need something to work with to figure out exactly what is going on. I have all the GLSL set up and the uniform matrices being fed into the rendering with my VAOs; I just need to understand the matrices and build them.

edit:
My vertex shader uses this standard setup, with three uniform mat4s multiplied with a position vec3:

gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(in_Position, 1.0);

Can mat3s and a vec2 be used for position to achieve better performance in purely 2D cases?

2
gl_Position = projectionMatrix * modelMatrix * viewMatrix - this doesn't look like any standard I've ever seen. Generally it's proj * view * model; yours looks backward. – Tim

2 Answers

14 votes

Here is an example of lookAt and setPerspective functions creating the view and projection matrices from simple inputs:

void Camera::lookAt(const Eigen::Vector3f& position, const Eigen::Vector3f& target, const Eigen::Vector3f& up)
{
  // Build the camera's orthonormal basis: the columns of R are the
  // right, up, and backward axes (the camera looks along -Z).
  Eigen::Matrix3f R;
  R.col(2) = (position-target).normalized();
  R.col(0) = up.cross(R.col(2)).normalized();
  R.col(1) = R.col(2).cross(R.col(0));
  // The view matrix is the inverse of the camera's transform:
  // transpose the rotation and rotate the negated translation.
  mViewMatrix.topLeftCorner<3,3>() = R.transpose();
  mViewMatrix.topRightCorner<3,1>() = -R.transpose() * position;
  mViewMatrix.row(3) << 0.0f, 0.0f, 0.0f, 1.0f;  // set the full bottom row, not just (3,3)
}
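
As a usage sketch for the top-down camera described in the question (camX, camY, and height are hypothetical variables), you might call it like this:

// Camera hovering at 'height' above (camX, camY), looking straight down,
// with +Y as the up hint so the world's Y axis points up on screen.
mCamera.lookAt(Eigen::Vector3f(camX, camY, height),
               Eigen::Vector3f(camX, camY, 0.0f),
               Eigen::Vector3f::UnitY());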

void Camera::setPerspective(float fovY, float aspect, float near, float far)
{
  float theta = fovY * 0.5f;
  float range = far - near;
  float invtan = 1.0f / std::tan(theta);  // needs <cmath>

  mProjectionMatrix.setZero();  // only the entries below are non-zero
  mProjectionMatrix(0,0) = invtan / aspect;             // x scale
  mProjectionMatrix(1,1) = invtan;                      // y scale
  mProjectionMatrix(2,2) = -(near + far) / range;       // map z into [-1, 1]
  mProjectionMatrix(2,3) = -2.0f * near * far / range;
  mProjectionMatrix(3,2) = -1.0f;                       // put -z in w for the perspective divide
}
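
For the orthographic projection mentioned in the question, you fill in the standard glOrtho matrix instead. Here is a sketch in the same style; it assumes the same mProjectionMatrix member as above:

void Camera::setOrthographic(float left, float right, float bottom, float top, float near, float far)
{
  mProjectionMatrix.setIdentity();
  mProjectionMatrix(0,0) = 2.0f / (right - left);             // x scale
  mProjectionMatrix(1,1) = 2.0f / (top - bottom);             // y scale
  mProjectionMatrix(2,2) = -2.0f / (far - near);              // z scale (no perspective divide)
  mProjectionMatrix(0,3) = -(right + left) / (right - left);  // x offset
  mProjectionMatrix(1,3) = -(top + bottom) / (top - bottom);  // y offset
  mProjectionMatrix(2,3) = -(far + near) / (far - near);      // z offset
}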

You can then pass the matrices to GL. Eigen stores matrices in column-major order by default, which is exactly what OpenGL expects, so the transpose flag is GL_FALSE:

glUniformMatrix4fv(glGetUniformLocation(mProgram.id(),"mat_view"), 1, GL_FALSE, mCamera.viewMatrix().data());
glUniformMatrix4fv(glGetUniformLocation(mProgram.id(),"mat_proj"), 1, GL_FALSE, mCamera.projectionMatrix().data());

For the model transformation (it is better to keep the view and model separated), you can use the Geometry module with the Scaling, Translation, and Quaternion classes to assemble an Affine3f object, as sketched below.
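
A minimal sketch of that assembly, assuming hypothetical per-object members mPosition (Eigen::Vector3f), mOrientation (Eigen::Quaternionf), and mScale (float):

#include <Eigen/Geometry>

Eigen::Matrix4f Object::modelMatrix() const
{
  // Scale first, then rotate, then translate (the right-most factor applies first).
  Eigen::Affine3f t = Eigen::Translation3f(mPosition)
                    * mOrientation
                    * Eigen::Scaling(mScale);
  return t.matrix();  // 4x4 matrix, ready to upload as the model uniform
}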

0 votes

Shaders run for every vertex supplied to the rendering pipeline. To get the best performance, you usually perform the per-draw ("uniform") computations on the CPU, pass the results to every shader instance using uniforms, and keep the per-vertex work minimal.

In the example you have provided, it is better to compute only mat4 * vec4 per vertex instead of mat4 * mat4 * mat4 * vec4:

gl_Position = modelviewprojectionMatrix * vec4(in_Position, 1.0);

where modelviewprojectionMatrix is the result of projectionMatrix * viewMatrix * modelMatrix, computed on the CPU side once for each set of vertices that you need to render.
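
A sketch of that CPU-side computation with Eigen, reusing the camera from the first answer (the uniform name matches the shader line above; program is assumed to be your GL program id):

Eigen::Matrix4f mvp = mCamera.projectionMatrix() * mCamera.viewMatrix() * modelMatrix;
glUniformMatrix4fv(glGetUniformLocation(program, "modelviewprojectionMatrix"),
                   1, GL_FALSE, mvp.data());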

How you organize the data needed to derive the model-view-projection matrices is up to your requirements. The actual performance depends on the scene graph being rendered; for example, if objects only translate (perhaps only on the XY plane), you can store plain translation vectors and generate the matrices only when they are needed.
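
For instance, a sketch of that translation-only case (the helper name is made up): store just an XY offset per object and build the matrix on demand.

Eigen::Matrix4f modelFromXY(const Eigen::Vector2f& posXY)
{
  Eigen::Affine3f t = Eigen::Affine3f::Identity();
  t.translation().head<2>() = posXY;  // write x and y, leave z at 0
  return t.matrix();
}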

Matrices are combined with standard matrix multiplication: two transformations (model, view, or projection) are concatenated by multiplying their matrices, and with column vectors the right-most factor is applied to the vertex first.
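
For example (illustrative values only):

// The right-most transform applies first: rotate around Z, then translate.
Eigen::Affine3f t = Eigen::Translation3f(1.0f, 2.0f, 0.0f)
                  * Eigen::AngleAxisf(0.5f, Eigen::Vector3f::UnitZ());
Eigen::Vector3f p = t * Eigen::Vector3f::UnitX();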