
For picking objects, I've implemented a ray-casting algorithm similar to what's described here. After converting the mouse click to a ray (with origin and direction), the next task is to intersect this ray with all triangles in the scene to determine the hit points for each mesh.

I have also implemented the triangle intersection test based on the algorithm described here. My question is: how should I account for the objects' transforms when performing the intersection? Obviously, I don't want to apply the transformation matrix to all vertices and then do the intersection test (too slow).
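For reference, the kind of ray-triangle test I mean looks roughly like this (a simplified Möller-Trumbore-style sketch; Ray is my own type with Origin and Direction members, and the epsilon is arbitrary):

private static bool RayIntersectsTriangle(Ray ray, Vector3d v0, Vector3d v1, Vector3d v2, out double t)
{
    t = 0;
    const double epsilon = 1e-9;

    Vector3d edge1 = v1 - v0;
    Vector3d edge2 = v2 - v0;

    Vector3d p = Vector3d.Cross(ray.Direction, edge2);
    double det = Vector3d.Dot(edge1, p);
    if (Math.Abs(det) < epsilon)
        return false;                    // Ray is parallel to the triangle plane.

    double invDet = 1.0 / det;
    Vector3d s = ray.Origin - v0;
    double u = Vector3d.Dot(s, p) * invDet;
    if (u < 0 || u > 1)
        return false;                    // Hit point lies outside the triangle.

    Vector3d q = Vector3d.Cross(s, edge1);
    double v = Vector3d.Dot(ray.Direction, q) * invDet;
    if (v < 0 || u + v > 1)
        return false;

    t = Vector3d.Dot(edge2, q) * invDet; // Distance along the ray to the hit point.
    return t > epsilon;                  // Only count hits in front of the origin.
}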

EDIT:
Here is the UnProject implementation I'm using (I'm using OpenTK, by the way). I compared the results, and they match what gluUnProject gives me:

private Vector3d UnProject(Vector3d screen)
{
    int[] viewport = new int[4];
    OpenTK.Graphics.OpenGL.GL.GetInteger(OpenTK.Graphics.OpenGL.GetPName.Viewport, viewport);

    Vector4d pos = new Vector4d();

    // Map x and y from window coordinates to normalized device coordinates
    // in [-1, 1]; flip y because window coordinates have y pointing down.
    pos.X = (screen.X - viewport[0]) / viewport[2] * 2.0 - 1.0;
    pos.Y = 1.0 - (screen.Y - viewport[1]) / viewport[3] * 2.0;
    pos.Z = screen.Z * 2.0 - 1.0;   // Map window depth from [0, 1] to [-1, 1].
    pos.W = 1.0;

    // Unproject through the inverse of the combined modelview-projection matrix
    // (OpenTK multiplies row vectors on the left, hence modelview * projection),
    // then do the perspective divide by W.
    Vector4d pos2 = Vector4d.Transform(pos, Matrix4d.Invert(GetModelViewMatrix() * GetProjectionMatrix()));
    Vector3d pos_out = new Vector3d(pos2.X, pos2.Y, pos2.Z);

    return pos_out / pos2.W;
}  

Then I'm using this function to create a ray (with origin and direction):

private Ray ScreenPointToRay(Point mouseLocation)
{
    // Unproject the click at the near (z = 0) and far (z = 1) window depths,
    // then build a ray that points from the near point towards the far point.
    Vector3d near = UnProject(new Vector3d(mouseLocation.X, mouseLocation.Y, 0));
    Vector3d far = UnProject(new Vector3d(mouseLocation.X, mouseLocation.Y, 1));

    Vector3d origin = near;
    Vector3d direction = (far - near).Normalized();
    return new Ray(origin, direction);
} 

2 Answers

Answer 1 (2 votes)

Instead of transforming all the vertices into world space, apply the inverse of each object's transformation to the ray and do the intersection test in the object's local space.
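A minimal sketch in your OpenTK terms, assuming each mesh stores its local-to-world model matrix and that Ray exposes Origin and Direction (the names here are illustrative):

private Ray WorldRayToObjectSpace(Ray worldRay, Matrix4d modelMatrix)
{
    Matrix4d inverse = Matrix4d.Invert(modelMatrix);

    // Points take the full inverse transform, including translation...
    Vector3d localOrigin = Vector3d.TransformPosition(worldRay.Origin, inverse);

    // ...while directions ignore translation; transform, then renormalize
    // in case the matrix contains scaling.
    Vector3d localDirection = Vector3d.TransformVector(worldRay.Direction, inverse);

    return new Ray(localOrigin, localDirection.Normalized());
}

Intersect this local-space ray against the mesh's untransformed vertices. Note that if the model matrix contains scaling, the hit distance you get back is measured in object space, not world space.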

Answer 2 (1 vote)

I don't know if this is the best or most efficient approach, but I recently implemented something similar, like this:

In world space, the origin of the ray is the camera position. To get the direction of the ray, I assumed the user had clicked on the near plane of the camera and thus applied the inverse transformation (from screen space to world space) to the screen-space position

( mouseClick.x, viewportHeight - mouseClick.y, 0 )

and then subtracted the origin of the ray, i.e. the camera position, from the now transformed mouse click position.
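In your OpenTK terms that would look roughly like this (cameraPosition is assumed to be the camera's world-space position, and UnProject is the helper from the question, which already flips y internally):

private Ray ScreenPointToRayViaCamera(Point mouseLocation, Vector3d cameraPosition)
{
    // Unproject the click on the near plane (window depth 0), then point
    // the ray from the camera towards that near-plane position.
    Vector3d nearPoint = UnProject(new Vector3d(mouseLocation.X, mouseLocation.Y, 0));

    Vector3d direction = (nearPoint - cameraPosition).Normalized();
    return new Ray(cameraPosition, direction);
}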

In my case, there was no object-specific transformation, meaning I was done once I had my ray in world space. However, transforming origin & direction with the inverse model matrix would have been easy enough after that.

You mentioned that you tried to apply the inverse transformation but that it didn't work; maybe there's a bug in there? I used GLM, i.e. glm::unProject, for this.