
I'm trying to create an app for Android using OpenGL ES, but I'm having trouble handling touch input.

I've created a class CubeGLRenderer which spawns a Cube. CubeGLRenderer is in charge of the projection and view matrix, and Cube is in charge of its model matrix. The Cube is moving along the positive X axis, with no movement in Y nor Z.

CubeGLRenderer updates the view matrix each frame in order to move along with the cube, making the cube look stationary on screen:

Matrix.setLookAtM(mViewMatrix, 0, 0.0f, cubePos.y, -10.0f, 0.0f, cubePos.y, 0.0f, 0.0f, 1.0f, 0.0f);

The projection matrix is recalculated whenever the screen dimensions change (i.e. when the orientation of the device changes). The two matrices are then multiplied and passed to Cube.draw(), where the cube applies its model matrix and renders itself to screen.

So far, so good. Let's move on to the problem.

I want to touch the screen and calculate an angle from the center of the cube's screen coordinates to the point of the screen that I touched.
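The angle itself, once both screen points are known, is just an `atan2` call. A minimal sketch in plain Java (class and method names are hypothetical, and it assumes Android's convention that screen Y grows downward):

```java
public class TouchAngle {
    // Angle in degrees from the cube's screen position to the touch point.
    // Android screen Y grows downward, so dy is negated to get a
    // conventional counter-clockwise angle (0° = right, 90° = up).
    static float angleTo(float cubeX, float cubeY, float touchX, float touchY) {
        double dx = touchX - cubeX;
        double dy = -(touchY - cubeY); // flip Y: screen coords grow downward
        return (float) Math.toDegrees(Math.atan2(dy, dx));
    }

    public static void main(String[] args) {
        System.out.println(angleTo(100, 100, 200, 100)); // touch to the right: ≈ 0°
        System.out.println(angleTo(100, 100, 100, 0));   // touch above (smaller Y): ≈ 90°
    }
}
```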

I thought I'd just accomplish this using GLU.gluProject(), but I'm either not using it correctly or simply haven't understood it at all.

Here's the code I use to calculate the screen coordinates from the cube's world coordinates:

public boolean onTouchEvent(MotionEvent e) {
    Vec3 cubePos = cube.getPos();
    float[] modelMatrix = cube.getModelMatrix();
    float[] modelViewMatrix = new float[16];
    Matrix.multiplyMM(modelViewMatrix, 0, mViewMatrix, 0, modelMatrix, 0);
    int[] view = {0, 0, width, height};
    float[] screenCoordinates = new float[3];
    GLU.gluProject(cubePos.x, cubePos.y, cubePos.z, modelViewMatrix, 0, mProjectionMatrix, 0, view, 0, screenCoordinates, 0);


    switch (e.getAction()) {
        case MotionEvent.ACTION_DOWN:
            Log.d("CUBEAPP", "screenX: " + String.valueOf(screenCoordinates[0]));
            break;
    }

    return true;
}

What am I doing wrong?

What is your output like? - Matic Oblak
You do not seem to use the modelViewMatrix matrix but still modelMatrix although you compute it. - Matic Oblak
Sorry, I never used this, but if I understand the method correctly you need to use the modelViewMatrix to begin with. Other possible problems: you could try swapping the multiplication order when building the model-view matrix (order matters when multiplying matrices). Also, since many systems put the origin at the bottom left, you might need to use (0, height, width, -height) for the view coordinates, or invert the vertical touch coordinate as (height - touch.y) when comparing it with the projected data. - Matic Oblak
Not using modelViewMatrix despite calculating it was just me being careless. It still doesn't work when using the correct matrix though. Changing the order of the multiplied matrices doesn't do me any good either (I'm familiar with the actual math, what I've written should be the correct order, I think). - Abu Hassan
Also, here's a typical output: screenX: 5347.3594 screenX: -15742.4 screenX: -2774.0396 screenX: -1227.8964 screenX: -577.35376 screenX: -49.207764 - Abu Hassan

1 Answer


The same calculation your vertex shader performs when rendering the cube should be used to translate the cube's center into screen space.

Normally you multiply each vertex of the cube by the modelViewProjection matrix in the vertex shader and assign the result to gl_Position. You should take that exact same matrix and multiply the center of the cube by it. However, multiplying a 4x4 matrix by a Vec4 vertex (x, y, z, 1) gives you a Vec4 result (x2, y2, z2, w). In order to get normalized device coordinates you need to divide x2 and y2 by w! After the divide by w, your xy coordinates are supposed to be within the [-1..1]x[-1..1] range.

In order to get the exact pixel, you would need to map x2/w and y2/w from [-1..1] into [0..1] (multiply by 0.5 and add 0.5) and then multiply the results by the screen width and height respectively.
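The steps above can be sketched in plain Java, with no Android dependency; the matrix layout matches the column-major convention of android.opengl.Matrix, and the helper names are hypothetical:

```java
public class Project {
    // Multiply a column-major 4x4 matrix (android.opengl.Matrix layout)
    // by the point (x, y, z, 1), returning the clip-space Vec4.
    static float[] transform(float[] m, float x, float y, float z) {
        return new float[] {
            m[0] * x + m[4] * y + m[8]  * z + m[12],
            m[1] * x + m[5] * y + m[9]  * z + m[13],
            m[2] * x + m[6] * y + m[10] * z + m[14],
            m[3] * x + m[7] * y + m[11] * z + m[15],
        };
    }

    // Project a world-space point to window coordinates:
    // clip space -> perspective divide -> viewport mapping.
    static float[] project(float[] mvp, float x, float y, float z,
                           int width, int height) {
        float[] clip = transform(mvp, x, y, z);
        float ndcX = clip[0] / clip[3];             // perspective divide:
        float ndcY = clip[1] / clip[3];             // now in [-1, 1]
        float winX = (ndcX * 0.5f + 0.5f) * width;  // map [-1,1] -> [0,width]
        float winY = (ndcY * 0.5f + 0.5f) * height; // origin at bottom-left
        return new float[] { winX, winY };
    }

    public static void main(String[] args) {
        // With an identity MVP, a point already in NDC maps straight
        // to the viewport: (0, 0) lands at the screen center.
        float[] identity = {
            1, 0, 0, 0,
            0, 1, 0, 0,
            0, 0, 1, 0,
            0, 0, 0, 1,
        };
        float[] win = project(identity, 0f, 0f, 0f, 800, 480);
        System.out.println(win[0] + ", " + win[1]); // 400.0, 240.0
    }
}
```

Note that winY here has its origin at the bottom-left, as OpenGL does, while MotionEvent coordinates have it at the top-left, so compare against (height - e.getY()).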

Hope this helps.