I am making an application that will work somewhat like the Kinect WebServer WPF sample. I am currently trying to map hand positions to screen coordinates so they can act like cursors. Seems all fine and dandy, so let's look at the InteractionHandPointer documentation:
Gets interaction-adjusted X-coordinate of hand pointer position. 0.0 corresponds to left edge of interaction region and 1.0 corresponds to right edge of interaction region, but values could be outside of this range.
And the same goes for Y. Wow, sounds good. If it's a value between 0 and 1 I can just multiply it by the screen resolution and get my coordinates, right?
Well, it turns out it regularly returns values outside that range, going as low as -3 and as high as 4. I also checked out SkeletonPoint, but its use of meters as the unit makes it even harder to map reliably.
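For reference, this is the kind of mapping I'm attempting: clamp the interaction-adjusted coordinates to [0, 1] and then scale by the screen resolution. Sketched in Python for illustration only (the function name and the 1920x1080 resolution are my own placeholders, not anything from the SDK):

```python
def hand_to_screen(x, y, screen_w=1920, screen_h=1080):
    """Map interaction-adjusted hand coordinates to pixel coordinates.

    The SDK documents 0.0 as the left/top edge and 1.0 as the right/bottom
    edge, but in practice values range well outside [0, 1] (I've seen -3 to 4),
    so clamp first before scaling.
    """
    cx = min(max(x, 0.0), 1.0)
    cy = min(max(y, 0.0), 1.0)
    # Scale to the last valid pixel index in each dimension.
    return int(cx * (screen_w - 1)), int(cy * (screen_h - 1))
```

Clamping keeps the cursor on-screen, but it also means the hand pointer pins to an edge for large stretches of arm movement, which is why I'm wondering what adjustments others have used instead.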
So, has anyone had any luck using the InteractionHandPointer? If so, what kind of adjustments did you do?
Best Regards, João Fernandes