Unfortunately I'm still struggling a little with the new Kinect SDK 1.7. This problem is in the same context as my earlier question, "finding events via reflection c#" (the Click event), but that question is not necessary for understanding this one.
My problem is simple: when my right hand controls the cursor (the "new" Kinect HandPointer) and it is in the upper-left corner of the screen, I want it to return the coordinates (0,0). If the cursor is in the lower-right corner, the coordinates should be (1920,1080), or whatever the current screen resolution is.
The new SDK has so-called PhysicalInteractionZones (PIZs) for each HandPointer (up to four), which move with the HandPointers and have values from 0.0 (upper-left) to 1.0 (lower-right). This basically means I can't use them for mapping to the screen, since they change dynamically according to the user's movement in front of the Kinect. At least, I was unable to find a way to make that work.
I then tried it via the SkeletonStream: the coordinates of the right hand are tracked, and as soon as a click gesture is registered, the Click event is triggered at that specific point. I tried it with the following code:
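To make clear what I am after conceptually: if I had a stable normalized (0.0 to 1.0) hand position, mapping it to the screen would be trivial. The helper below is my own sketch of that step (the method name and the clamping are my assumptions; the open question is where to get stable normalized coordinates from):

```csharp
using System;

static class HandMappingSketch
{
    // Hypothetical helper: map a normalized (0.0 - 1.0) hand position to
    // absolute screen pixels. Values slightly outside the zone are clamped
    // so the cursor never leaves the screen.
    public static (double X, double Y) NormalizedToScreen(
        double normX, double normY, double screenWidth, double screenHeight)
    {
        normX = Math.Max(0.0, Math.Min(1.0, normX));
        normY = Math.Max(0.0, Math.Min(1.0, normY));
        return (normX * screenWidth, normY * screenHeight);
    }
}
```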
private void ksensor_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    using (SkeletonFrame frame = e.OpenSkeletonFrame())
    {
        if (frame != null)
        {
            frame.CopySkeletonDataTo(this._FrameSkeletons);
            var accelerometerReading =
                Settings.Instance.ksensor.AccelerometerGetCurrentReading();
            ProcessFrame(frame);
            _InteractionStream.ProcessSkeleton(_FrameSkeletons,
                accelerometerReading, frame.Timestamp);
        }
    }
}
private void ProcessFrame(ReplaySkeletonFrame frame)
{
    foreach (var skeleton in frame.Skeletons)
    {
        if (skeleton.TrackingState != SkeletonTrackingState.Tracked)
            continue;

        foreach (Joint joint in skeleton.Joints)
        {
            if (joint.TrackingState != JointTrackingState.Tracked)
                continue;

            if (joint.JointType == JointType.HandRight)
            {
                _SwipeGestureDetectorRight.Add(joint.Position,
                    Settings.Instance.ksensor);
                _RightHand = GetPosition(joint);
                myTextBox.Text = _RightHand.ToString();
            }
            if (joint.JointType == JointType.HandLeft)
            {
                _SwipeGestureDetectorLeft.Add(joint.Position,
                    Settings.Instance.ksensor);
                _LeftHand = GetPosition(joint);
            }
        }
    }
}
The auxiliary GetPosition method is defined as follows:
private Point GetPosition(Joint joint)
{
    DepthImagePoint point = Settings.Instance.ksensor.CoordinateMapper
        .MapSkeletonPointToDepthPoint(joint.Position,
            Settings.Instance.ksensor.DepthStream.Format);

    point.X *= (int)Settings.Instance.mainWindow.ActualWidth
        / Settings.Instance.ksensor.DepthStream.FrameWidth;
    point.Y *= (int)Settings.Instance.mainWindow.ActualHeight
        / Settings.Instance.ksensor.DepthStream.FrameHeight;

    return new Point(point.X, point.Y);
}
As soon as the click gesture is detected, a simple invokeClick(_RightHand) is called and performs the click. The click itself works perfectly fine (thanks again to the people who answered that question). What is not working so far is the mapping of the coordinates, since I only get coordinates in the ranges
x-axis: 900 - 1500 (from left to right)
y-axis: 300 - 740 (from top to bottom)
And these coordinates even vary by 100 or 200 pixels each time I try to reach one specific point on the screen. For example, the left-hand side of the screen is 900 at first, but when I move my hand out of the Kinect's range (behind my back or under the table) and repeat the movement towards the left-hand side, I suddenly get coordinates of around 700.
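For reference, here is the arithmetic I think GetPosition ends up doing, assuming my window is full-screen at 1920x1080 and the depth stream is 640x480 (both assumptions on my part). The integer scale factors come out to 3 and 2, so the ranges I observe would correspond to my hand only ever sweeping part of the depth image:

```csharp
// Integer scale factors as computed in GetPosition:
int scaleX = 1920 / 640;   // 3
int scaleY = 1080 / 480;   // 2 (2.25 truncated)

// Mapping my observed on-screen x range (900 - 1500) back to depth space
// shows the hand only covers x = 300 - 500 out of the full 0 - 640:
int depthLeft  = 900  / scaleX;   // 300
int depthRight = 1500 / scaleX;   // 500
```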
I even tried the ScaleTo methods from Coding4Fun.Kinect.Wpf (ScaleTo(1920, 1080) and ScaleTo(SystemParameters.PrimaryScreenWidth, SystemParameters.PrimaryScreenHeight) respectively), but those just gave me crazy coordinates like x: 300000, y: -100000 or 240000. I'm running out of ideas, so I hope someone out there has an idea for me, or even a solution.
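For what it's worth, my understanding of what a ScaleTo-style mapping should do is roughly the following (this is my own reimplementation sketch, not the Coding4Fun source, and the 0.35 m half-range is just an example value): take a skeleton coordinate in roughly [-max, +max] meters and map it linearly onto [0, screenSize] pixels, clamping at the edges. Maybe someone can tell me where my usage diverges from this:

```csharp
using System;

static class ScaleToSketch
{
    // Linear map of a skeleton-space coordinate (roughly [-max, +max]
    // meters) onto [0, screenSize] pixels, clamped to the screen.
    // Note: for the y axis one would additionally flip the direction,
    // since skeleton y points up while screen y points down.
    public static double ScaleAxis(double value, double max, double screenSize)
    {
        double scaled = (value + max) / (2 * max) * screenSize;
        return Math.Max(0.0, Math.Min(screenSize, scaled));
    }
}
```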
Sorry for the long text, but I've tried to be as specific as I could. Thanks in advance for any help!