I am making an application that will somewhat work like the Kinect's WebServer WPF sample. I am currently trying to map hand positions to screen coordinates so they can work like cursors. Seems all fine and dandy, so let's look at the InteractionHandPointer documentation:

Gets interaction-adjusted X-coordinate of hand pointer position. 0.0 corresponds to left edge of interaction region and 1.0 corresponds to right edge of interaction region, but values could be outside of this range.

And the same goes for Y. Wow, sounds good. If it's a value between 0 and 1 I can just multiply it by the screen resolution and get my coordinates, right?
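
In code, the mapping I had in mind is roughly this (just a sketch; hand is the InteractionHandPointer I get from the interaction frame in Microsoft.Kinect.Toolkit.Interaction, and the resolution values are placeholders):

// Screen size placeholders; in the real app this comes from the display.
double screenWidth = 1920.0;
double screenHeight = 1080.0;

// If X and Y really stayed in [0, 1], this would be all that's needed:
double cursorX = hand.X * screenWidth;
double cursorY = hand.Y * screenHeight;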

Well, it turns out that it regularly returns values outside that range, going as low as -3 and as high as 4. I also checked out SkeletonPoint, but its use of meters as the scale makes it even harder to use reliably.

So, has anyone had any luck using the InteractionHandPointer? If so, what kind of adjustments did you do?

Best Regards, João Fernandes

2 Answers

The interaction zone is an area, defined per hand, in which the user can comfortably interact. When the value is lower than 0 or greater than 1, the user's hand is outside the interaction region and you should ignore the movement.
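
For example, something along these lines (a sketch; hand is the InteractionHandPointer, while MoveCursor and the screen size variables are just placeholder names):

// Only move the cursor while the hand is inside the interaction region.
bool insideRegion = hand.X >= 0.0 && hand.X <= 1.0 &&
                    hand.Y >= 0.0 && hand.Y <= 1.0;

if (insideRegion)
{
    MoveCursor(hand.X * screenWidth, hand.Y * screenHeight);
}
// else: ignore the movement and leave the cursor where it was.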

To those wondering: as kallocain said, if the value is greater than 1 or lower than 0, the user's hand is outside the interaction region. The fact that the boundaries of this region aren't configurable is quite a bother.

When the values do fall outside this range, you can indeed choose to ignore them. Instead of doing that, I clamped them to the region like this:

Math.Max(0.0, Math.Min(hand.X, 1.0))
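
Put together with the screen mapping, the whole thing looks roughly like this (a sketch; ToScreen is just a name I made up, Point is the WPF System.Windows.Point, and InteractionHandPointer comes from Microsoft.Kinect.Toolkit.Interaction):

// Clamp the interaction-adjusted coordinates into [0, 1], then scale to pixels.
private Point ToScreen(InteractionHandPointer hand,
                       double screenWidth, double screenHeight)
{
    double x = Math.Max(0.0, Math.Min(hand.X, 1.0));
    double y = Math.Max(0.0, Math.Min(hand.Y, 1.0));

    return new Point(x * screenWidth, y * screenHeight);
}

The clamp means the cursor sticks to the edge of the screen instead of disappearing when the hand leaves the interaction region.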

I hope this helps someone someday.