
So I've got a stripped-down view controller example that reproduces behavior I can't seem to wrap my head around. I'm using a UIPanGestureRecognizer with some code that correctly allows a UIView to be moved around the window. However, if I update the frame, even assigning it to itself, the view jumps to an unexpected location, presumably based on the view's center.

I have two questions about this code. First, why is the view's center the position of the last touch (basically the anchorPoint mapped to the view's bounds), and apparently not in any way related to the center of the view's frame?

Second, why when changing the frame property to the current value does the view move? It seems to me that the view's center and frame are only partly correlated and I'm not sure what I'm missing.

This example just creates a basic test view controller in the app delegate:

TestViewController *vc = [[TestViewController alloc] init];
self.window.rootViewController = vc;

The TestViewController.m:

#import "TestViewController.h"
#import <QuartzCore/QuartzCore.h>

@implementation TestViewController

- (void)loadView
{
    self.view = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 1024, 748)];
    self.view.backgroundColor = UIColor.redColor;

    UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(onPan:)];
    pan.maximumNumberOfTouches = 1;
    [self.view addGestureRecognizer:pan];
}

- (void)onPan:(UIPanGestureRecognizer*)recognizer
{
    if (recognizer.state == UIGestureRecognizerStateBegan) {
        UIView *view = recognizer.view;
        CGPoint locationInView = [recognizer locationInView:view];
        CGPoint locationInSuperview = [recognizer locationInView:view.superview];

        view.layer.anchorPoint = CGPointMake(locationInView.x / view.bounds.size.width, locationInView.y / view.bounds.size.height);
        view.center = locationInSuperview;
    }

    if (recognizer.state == UIGestureRecognizerStateBegan || recognizer.state == UIGestureRecognizerStateChanged) {

        CGPoint translation = [recognizer translationInView:[recognizer.view superview]];
        [recognizer.view setCenter:CGPointMake(recognizer.view.center.x + translation.x, recognizer.view.center.y + translation.y)];
        [recognizer setTranslation:CGPointZero inView:[recognizer.view superview]];
    }

    if(recognizer.state == UIGestureRecognizerStateEnded) {

        NSLog(@"AnchorPoint: %@", NSStringFromCGPoint(self.view.layer.anchorPoint));
        NSLog(@"Center: %@", NSStringFromCGPoint(self.view.center));
        NSLog(@"Frame: %@", NSStringFromCGRect(self.view.frame));

        self.view.frame = self.view.frame;
    }
}

@end

When the pan gesture ends, if I remove the following line, the pan works as expected:

self.view.frame = self.view.frame;

I really don't see how that line can cause the view to move around at all, unless the previous center property didn't "fully stick" or something. Furthermore, I don't understand the value of the center property, since it appears to be the last place touched, and not the center of the view at all. Example log:

AnchorPoint: {0.204427, 0.748506}
Center: {625, 218.5}
Frame: {{14, -34}, {768, 1004}}

What I am trying to implement here, in the long run, is a view which can scale, but when scrolled beyond the edges will snap back (like a UIScrollView would).

1 Answer


If you remove these two lines

    view.layer.anchorPoint = CGPointMake(locationInView.x / view.bounds.size.width, locationInView.y / view.bounds.size.height);
    view.center = locationInSuperview;

Then your view panning behaves as you would expect. Why are you changing the layer's anchorPoint anyway? In general, your code seems over-complicated for what you are trying to achieve. See David Rönnqvist's "Understanding the anchor point" for how changing the anchorPoint has (often unintended) implications for the view's frame property.

Moreover, it is usually a good idea to leave the top-level view (the view controller's self.view) alone and manipulate subviews instead. That way you know how everything relates to the view controller's view, without wondering what is going on with respect to the UIWindow. When you rotate the device, a rotation transform is applied to that top-level view, which doesn't help when you are inspecting frame data. Its subviews' properties aren't affected by that transform.

Your code behaves as you expect in untransformed portrait mode, and only throws up strange results in landscape or upside-down portrait. Those are the cases where the view has had a rotation transform applied. Apple's documentation says that frame is undefined when a view's transform is not the identity transform, so when you do this:


self.view.frame = self.view.frame;

you are assigning a property in an undefined state. Expect the unexpected.

You could change your view-loading code thus:

- (void)viewDidLoad
{
    [super viewDidLoad];

    UIView* subview = [[UIView alloc] initWithFrame:
                       CGRectMake(0, 0
                                  , self.view.bounds.size.width
                                  , self.view.bounds.size.height)];
    [self.view addSubview:subview];
    subview.backgroundColor = UIColor.redColor;
    [subview setAutoresizingMask:UIViewAutoresizingFlexibleWidth 
                                |UIViewAutoresizingFlexibleHeight];
    UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc] 
                                     initWithTarget:self 
                                             action:@selector(onPan:)];
    pan.maximumNumberOfTouches = 1;
    [subview addGestureRecognizer:pan];
}

Here we do not override -loadView; we let UIViewController create the top-level view automatically, and add a subview in -viewDidLoad instead. Now your test app behaves correctly.