The discontinuity you mentioned, the "flipping", comes from using a fixed "up" direction in your lookat matrix construction. When the "eye" direction becomes parallel (or nearly parallel) to "up", the cross product used to build the camera's basis degenerates and the lookat matrix behaves oddly. So your code would need to adjust the "up" direction appropriately.
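For example (a minimal sketch of the idea, not code from the SDK; the helper name and threshold are my own), you could swap in a fallback axis whenever the preferred "up" gets too close to the view direction:

#include <glm/glm.hpp>

// Returns an "up" vector that is safe to pass to a lookat construction for the
// given view direction. If the preferred up is nearly parallel to the view
// direction, fall back to a different axis.
glm::vec3 SafeUp(const glm::vec3 &viewDir, const glm::vec3 &preferredUp)
{
    // |dot| near 1 means the two directions are nearly parallel.
    float alignment = glm::dot(glm::normalize(viewDir), glm::normalize(preferredUp));
    if(glm::abs(alignment) > 0.999f)
        return glm::vec3(0.0f, 0.0f, 1.0f); // arbitrary fallback axis
    return preferredUp;
}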
But the basic problem of it not rotating the way you want it to rotate? That comes from using the wrong tool to solve the problem, which in turn comes from lacking a full understanding of the problem itself.
I'm going to assume that you want to implement something like the mouse rotation controls in 3D modelling applications like Maya or Blender3D. If you pay attention, you'll notice that the orientation of the viewer is actually taken into account by such mouse controls. If you change the angle at which you're viewing the object, the rotation you apply to that object changes.
Or to put it another way, you want to control the orientation of the object relative to the camera. But the matrix you ultimately set is a rotation relative to the world.
So here's what you do. You need to generate an orientation offset based on the mouse movements. This is relative to the camera. You then need to transform that orientation offset to be relative to the world. And then, you apply that to the object's current orientation.
Spherical coordinates can't do this. They deal in angles that are relative to the world. And transforming such angles is difficult if not impossible. Instead, you need to work with orientations directly, not with angles.
That means either matrices or quaternions.
The unofficial GL SDK I wrote has a class that does this. The key piece of code is this:
void ObjectPole::RotateViewDegrees( const glm::fquat &rot, bool bFromInitial )
{
    if(!m_bIsDragging)
        bFromInitial = false;

    if(m_pView)
    {
        // The camera's current orientation, extracted from the view matrix.
        glm::fquat viewQuat = glm::quat_cast(m_pView->CalcMatrix());
        glm::fquat invViewQuat = glm::conjugate(viewQuat);

        // Transform the camera-space rotation delta into world space,
        // then apply it to the object's orientation.
        m_po.orientation = glm::normalize((invViewQuat * rot * viewQuat) *
            (bFromInitial ? m_startDragOrient : m_po.orientation));
    }
    else
        RotateWorldDegrees(rot, bFromInitial);
}
The input to the function is a quaternion representing the rotation delta, computed based on how much the mouse moved between frames. This function's job is to apply that delta to the object's current orientation. However, the delta is in camera space (which is the space the user is seeing). Since the stored object's orientation is in world space, we need to transform the delta into world space before applying it to the object's orientation.
That is the job of invViewQuat * rot * viewQuat. In short, it is a change of basis: conjugating the camera-space delta by the view orientation re-expresses that rotation in world space. The full reason why that math works is a bit too complex to go into here.
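If you want to see how such a delta might be produced in the first place, here is a rough sketch of a caller (the function name, axis choices, and degrees-per-pixel scale are my own assumptions, not part of the SDK):

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Build a camera-space rotation delta from mouse motion and apply it to a
// world-space orientation, mirroring what RotateViewDegrees does internally.
glm::fquat ApplyMouseRotation(const glm::fquat &objectOrient, // object's world-space orientation
                              const glm::fquat &viewQuat,     // camera orientation from the view matrix
                              float mouseDeltaX, float mouseDeltaY,
                              float degreesPerPixel = 0.25f)
{
    // Horizontal mouse motion -> rotation about the camera's Y axis,
    // vertical motion -> rotation about the camera's X axis.
    glm::fquat rotX = glm::angleAxis(glm::radians(mouseDeltaY * degreesPerPixel),
                                     glm::vec3(1.0f, 0.0f, 0.0f));
    glm::fquat rotY = glm::angleAxis(glm::radians(mouseDeltaX * degreesPerPixel),
                                     glm::vec3(0.0f, 1.0f, 0.0f));
    glm::fquat rot = rotY * rotX; // camera-space delta

    // Re-express the camera-space delta in world space, then apply it.
    glm::fquat invViewQuat = glm::conjugate(viewQuat);
    return glm::normalize((invViewQuat * rot * viewQuat) * objectOrient);
}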