5 votes

I am trying to implement zooming on a canvas which should focus on a pivot point. Zooming works fine, but afterwards the user should be able to select elements on the canvas. The problem is that my translation values seem to be incorrect, because they have a different offset than the ones I get when I don't zoom to the pivot point (zooming without a pivot point and dragging works fine). I used some code from this example.

The relevant code is:

class DragView extends View {

private static float MIN_ZOOM = 0.2f;
private static float MAX_ZOOM = 2f;

// These constants specify the mode that we're in
private static int NONE = 0;
private int mode = NONE;
private static int DRAG = 1;
private static int ZOOM = 2;
public ArrayList<ProcessElement> elements;

// Visualization
private boolean checkDisplay = false;
private float displayWidth;
private float displayHeight;
// These two variables keep track of the X and Y coordinate of the finger when it first
// touches the screen
private float startX = 0f;
private float startY = 0f;
// These two variables keep track of the amount we need to translate the canvas along the X
//and the Y coordinate
// Also the offset from initial 0,0
private float translateX = 0f;
private float translateY = 0f;

private float lastGestureX = 0;
private float lastGestureY = 0;

private float scaleFactor = 1.f;
private ScaleGestureDetector detector;
...

private void sharedConstructor() {
    elements = new ArrayList<ProcessElement>();
    flowElements = new ArrayList<ProcessFlow>();
    detector = new ScaleGestureDetector(getContext(), new ScaleListener());
}

/**
 * checked once to get the measured screen height/width
 * @param hasWindowFocus
 */
@Override
public void onWindowFocusChanged(boolean hasWindowFocus) {
    super.onWindowFocusChanged(hasWindowFocus);
    if (!checkDisplay) {
        displayHeight = getMeasuredHeight();
        displayWidth = getMeasuredWidth();
        checkDisplay = true;
    }
}

@Override
public boolean onTouchEvent(MotionEvent event) {
    ProcessBaseElement lastElement = null;

    switch (event.getAction() & MotionEvent.ACTION_MASK) {
        case MotionEvent.ACTION_DOWN:
            mode = DRAG;

            // Check if an element has been touched.
            // We need its absolute position, which is why the translation offset is removed
            // and the result is divided by the scale factor.
            touchedElement = isElementTouched((event.getX() - translateX) / scaleFactor, (event.getY() - translateY) / scaleFactor);

            // If no element has been touched, the canvas itself will be dragged.
            if (touchedElement == null) {
                // We assign the current X and Y coordinate of the finger to startX and startY minus the
                // previously translated amount for each coordinate. This works even for the first
                // translation, because the initial values of these two variables are zero.
                startX = event.getX() - translateX;
                startY = event.getY() - translateY;
            }
            // if an element has been touched -> no need to take offset into consideration, because there's no dragging possible
            else {
                startX = event.getX();
                startY = event.getY();
            }

            break;

        case MotionEvent.ACTION_MOVE:
            if (mode != ZOOM) {
                if (touchedElement == null) {
                    translateX = event.getX() - startX;
                    translateY = event.getY() - startY;
                } else {
                    startX = event.getX();
                    startY = event.getY();
                }
            }

            if(detector.isInProgress()) {
                lastGestureX = detector.getFocusX();
                lastGestureY = detector.getFocusY();
            }

            break;

        case MotionEvent.ACTION_UP:
            mode = NONE;

            break;
        case MotionEvent.ACTION_POINTER_DOWN:
            mode = ZOOM;

            break;
        case MotionEvent.ACTION_POINTER_UP:
            break;
    }

    detector.onTouchEvent(event);
    invalidate();

    return true;
}

private ProcessBaseElement isElementTouched(float x, float y) {
    for (int i = elements.size() - 1; i >= 0; i--) {
        if (elements.get(i).isTouched(x, y))
            return elements.get(i);
    }
    return null;
}

@Override
public void onDraw(Canvas canvas) {
    super.onDraw(canvas);

    canvas.save();

    if(detector.isInProgress()) {
        canvas.scale(scaleFactor,scaleFactor,detector.getFocusX(),detector.getFocusY());
    } else
        canvas.scale(scaleFactor, scaleFactor,lastGestureX,lastGestureY);     // zoom

//        canvas.scale(scaleFactor,scaleFactor);

    //We need to divide by the scale factor here, otherwise we end up with excessive panning based on our zoom level
    //because the translation amount also gets scaled according to how much we've zoomed into the canvas.
    canvas.translate(translateX / scaleFactor, translateY / scaleFactor);

    drawContent(canvas);

    canvas.restore();
}

/**
 * scales the canvas
 */
private class ScaleListener extends ScaleGestureDetector.SimpleOnScaleGestureListener {
    @Override
    public boolean onScale(ScaleGestureDetector detector) {
        scaleFactor *= detector.getScaleFactor();
        scaleFactor = Math.max(MIN_ZOOM, Math.min(scaleFactor, MAX_ZOOM));
        return true;
    }
}
}

Elements are saved with their absolute position on the canvas (with dragging in mind). I suspect that I don't take the new offset from the pivot point into account in translateX and translateY, but I can't figure out where and how I should do this. Any help would be appreciated.

1 Answer

15 votes

Okay, so you're basically trying to figure out what a certain screen X/Y coordinate corresponds to after the view has been scaled (s) around a certain pivot point {Px, Py}.

So, let's try to break it down.

For the sake of argument, let's assume that Px & Py = 0, and that s = 2. This means the view was zoomed by a factor of 2 around the top-left corner of the view.

In this case, the screen coordinate {0, 0} corresponds to {0, 0} in the view, because that point is the only point which hasn't changed. Generally speaking, if the screen coordinate is equal to the pivot point, then there is no change.

What happens if the user clicks on some other point, let's say {2, 3}? In this case, what was once {2, 3} has now moved by a factor of 2 away from the pivot point (which is {0, 0}), so the corresponding position is {4, 6}.

All this is easy when the pivot point is {0, 0}, but what happens when it's not?

Well, let's look at another case: the pivot point is now the bottom-right corner of the view, {w, h} (where w is the width and h the height). Again, if the user clicks at that same position, the corresponding position is also {w, h}; but let's say the user clicks on some other position, for example {w - 2, h - 3}. The same logic applies here: the translated position is {w - 4, h - 6}.

To generalize: what we're trying to do is convert a screen coordinate to the translated coordinate. We need to perform the same operations on the X/Y coordinate we received that were performed on every pixel in the zoomed view.

Step 1 - we'd like to translate the X/Y position according to the pivot point:

X = X - Px
Y = Y - Py

Step 2 - Then we scale X & Y:

X = X * s
Y = Y * s

Step 3 - Then we translate back:

X = X + Px
Y = Y + Py
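
Put into code, the three steps could look something like this minimal sketch. The method name mapZoomed and its parameters (px, py for the pivot, s for the scale factor) are illustrative, not part of your DragView class:

// Applies the three steps above to a point: translate by the pivot, scale, translate back.
private static float[] mapZoomed(float x, float y, float px, float py, float s) {
    // Step 1: translate according to the pivot point
    x -= px;
    y -= py;
    // Step 2: scale
    x *= s;
    y *= s;
    // Step 3: translate back
    x += px;
    y += py;
    return new float[] { x, y };
}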

If we apply this to the last example I gave (I will only demonstrate for X):

Original value: X = w - 2, Px = w
Step 1: X <-- X - Px = w - 2 - w = -2
Step 2: X <-- X * s = -2 * 2 = -4
Step 3: X <-- X + Px = -4 + w = w - 4

Once you apply this to any X/Y coordinate you receive that refers to the pre-zoom state, the point will be translated so that it is relative to the zoomed state.

Hope this helps.