
I'm trying to convert from the OpenGL ES coordinate system to the screen coordinate system. This will let me find vertex positions on the screen and make collision detection easier. So far I've taken the vertices, multiplied them by the model view projection matrix, and then used the width and height of the Android screen (in pixels) to convert the vertices into the correct range. I had to convert because the OpenGL ES coordinate system has its origin at the center of the screen and runs from -1 to 1, while the screen coordinate system has its origin at the bottom left corner of the screen. With the device in landscape mode, I've found the dimensions to be 800x480 pixels.

Here is the problem: when I look at the range of values a vertex can take while on the screen, it ranges from about 147-640 in width and 185-480 in height. It should range from 0-800 in width and 0-480 in height.

What is wrong with my code? Is it the model view projection matrix, am I using the wrong screen measurements, or is it the way I've converted from the OpenGL ES range to the screen pixel range?

This finds a shape vertex, multiplies it by the MVP matrix, and converts the range of values so they fit the pixel screen dimensions. I'm only checking one of the shape's vertices.

    dimension[0] = MyGLSurfaceView.width;
    dimension[1] = MyGLSurfaceView.height;

    float starW;
    float starH;

    for (int i = 0; i < star.vertices.length; i += star.vertices.length) { // only checking one vertex

        // vertices multiplied by the model view projection matrix
        Matrix.multiplyMM(starVerts, 0, mMVPMatrix, 0, star.vertices, 0);

        // starVerts[i] is in the range -.466 to .433, should be -1 to 1
        // starVerts[i+1] is in the range -.246 to .973, should be -1 to 1

        starW = (starVerts[i] * (dimension[0] / 2)) + (dimension[0] / 2);     // should be range 0-800, instead 147-640
        starH = (starVerts[i + 1] * (dimension[1] / 2)) + (dimension[1] / 2); // should be range 0-480, instead 185-480

This is how I found the MVP matrix:

    @Override
    public void onSurfaceChanged(GL10 unused, int width, int height) {

        GLES20.glViewport(0, 0, width, height); // sets the current viewport to the new size

        float RATIO = (float) width / height;
        // this projection matrix is applied to object coordinates in the onDrawFrame() method
        Matrix.frustumM(mProjectionMatrix, 0, -RATIO, RATIO, -1, 1, 3, 7);
    }

    @Override
    public void onDrawFrame(GL10 unused) {
        // set the camera position (view matrix)
        Matrix.setLookAtM(mViewMatrix, 0, 0, 0, -3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
        // calculate the projection and view transformation
        Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);

This is how I found the dimensions of the Android screen in pixels:

    Display display = ((WindowManager) context.getSystemService(Context.WINDOW_SERVICE))
            .getDefaultDisplay();
    Point size = new Point();
    display.getSize(size);
    height = size.y;
    width = size.x;

Update since derhass's answer:

    starVerts[i] = starVerts[i] / starVerts[i + 3];         // clip.x divided by clip.w
    starVerts[i + 1] = starVerts[i + 1] / starVerts[i + 3]; // clip.y divided by clip.w
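For reference, here is a minimal end-to-end sketch of the whole conversion for a single vertex, assuming star.vertices stores x, y, z per vertex and dimension holds the pixel width and height as above (android.opengl.Matrix.multiplyMV transforms one 4-component vector, unlike multiplyMM, which multiplies two 4x4 matrices):

    float[] clip = new float[4];
    float[] vertex = { star.vertices[0], star.vertices[1], star.vertices[2], 1f };

    // object space -> clip space
    Matrix.multiplyMV(clip, 0, mMVPMatrix, 0, vertex, 0);

    // clip space -> normalized device coordinates (the perspective divide)
    float ndcX = clip[0] / clip[3];
    float ndcY = clip[1] / clip[3];

    // NDC [-1,1] -> pixel coordinates, origin at the bottom left
    float screenX = (ndcX + 1f) / 2f * dimension[0];
    float screenY = (ndcY + 1f) / 2f * dimension[1];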

1 Answer


You are missing a crucial step. When you multiply the ModelViewProjection matrix by some vector, you transform this vector from object space to clip space.

The next step the GL takes is the transformation to normalized device coordinates, where the viewing volume is the unit cube [-1,1] in all three dimensions. To get from clip space to normalized device coordinates, the clip space x, y, and z values are divided by the clip space w value (this is what finally creates the perspective effect; all other operations in the transformation chain are just linear). This is the step you have forgotten.
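In code, the missing step is just the divide by w; a minimal sketch using the question's starVerts layout (four consecutive floats per vertex):

    // clip space -> normalized device coordinates: divide x, y, z by w
    float ndcX = starVerts[i] / starVerts[i + 3];
    float ndcY = starVerts[i + 1] / starVerts[i + 3];
    float ndcZ = starVerts[i + 2] / starVerts[i + 3]; // only needed if you also want depth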

However, just adding this might not solve your issue completely (depending on the scene). The problem is that the GL does something else before going from clip space to NDC: clipping, that is, intersecting the primitive with the viewing volume (which at this point is just [-w,w] in all three dimensions, except that this w varies per vertex). Note that clipping works on whole primitives, not on separate vertices.

If you just want to deal with single points, you can check whether the relation -w <= x,y,z <= w is satisfied; if not, you can discard the point. But if you deal with lines or triangles, things become more complicated. You will basically have to implement full-fledged clipping; otherwise you will get meaningless data if the primitive you are drawing has at least one vertex in front of the camera and at least one vertex behind the camera (or exactly on the camera plane, which would yield w = 0 and a division by zero). If you do not handle clipping, the actual fragments generated may lie outside the 2D bounding box spanned by projecting all the original vertices of the primitive.
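For the single-point case, that check could look like the following sketch, where clip holds one clip-space vertex as produced by the MVP multiplication; the w > 0 test also guards against the division by zero mentioned above:

    float x = clip[0], y = clip[1], z = clip[2], w = clip[3];

    // a point survives clipping iff -w <= x, y, z <= w
    boolean inside = w > 0
            && -w <= x && x <= w
            && -w <= y && y <= w
            && -w <= z && z <= w;

    if (inside) {
        // safe to divide by w and map to screen pixels
    }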