2
votes

We are very new to working with the Microsoft Kinect, and are trying to hook it up to our Unity 2D game for a class project. Part of our class project requires that we project the game onto a fabric, and have the user press the fabric in order to interact with the game. The player will tap the fabric in order to collect falling objects. We are trying to use the depth map with the Kinect in order to see where a user has tapped on the screen, and then we can see what object they are interacting with.

We are having trouble finding a way to transform the Kinect depth coordinates into coordinates that work with our game. We are unable to use skeletal tracking due to our setup. Does anyone have any experience with transforming Kinect depth coordinate data to a 2D Unity game without using skeletal functions? Below is a diagram of our setup for clarification.

[Diagram of the projector/Kinect/fabric setup]

Have you looked at the depth frame data? msdn.microsoft.com/en-us/library/… The DepthImageFrame seems to contain a .Depth member for each pixel it receives. You should take one frame where the user is not touching the fabric as the "zero depth" baseline, and then you can calculate the difference for each pixel (or an X-by-Y block of pixels) to detect "touches". – Ron Beyer
Hi Ron, thanks for the response. We have tried that, but it still reads "phantom touches" in random places. Our main goal is also to convert the Kinect coordinates given to us by the depth function into screen coordinates that we can use in Unity. Any help would be appreciated! – Nada Stars
Do you average the values over some number of frames? – Ron Beyer
We are averaging them – the Unity game is still not able to map the coordinates correctly. We need a way to get the Kinect coordinates into Unity. – Nada Stars
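Following up on the comments above, here is a minimal sketch of the baseline-differencing idea with frame averaging. All class, method, and threshold names are illustrative assumptions (not Kinect SDK APIs); it only assumes you can copy each depth frame into a `ushort[]`, as the Kinect v2 SDK allows, and that the Kinect views the fabric so that a press moves it closer to the sensor:

    using System.Collections.Generic;

    // Hypothetical helper, not from the original post: detects a press on the
    // fabric by comparing each depth frame against an averaged idle baseline.
    public class TouchDetector
    {
        private readonly int width, height;
        private ushort[] baseline;                 // averaged "no touch" frame
        private const int TouchThresholdMm = 15;   // min depth change to count as a press (tune this)
        private const int MaxPlausibleMm = 200;    // reject implausibly large jumps as noise

        public TouchDetector(int width, int height)
        {
            this.width = width;
            this.height = height;
        }

        // Average several frames captured while nobody is touching the fabric.
        public void Calibrate(List<ushort[]> idleFrames)
        {
            baseline = new ushort[width * height];
            for (int i = 0; i < baseline.Length; i++)
            {
                long sum = 0;
                foreach (ushort[] frame in idleFrames) sum += frame[i];
                baseline[i] = (ushort)(sum / idleFrames.Count);
            }
        }

        // Returns the depth pixel with the strongest plausible press, or null.
        public (int x, int y)? FindTouch(ushort[] depthFrame)
        {
            int bestIndex = -1, bestDiff = TouchThresholdMm;
            for (int i = 0; i < depthFrame.Length; i++)
            {
                if (depthFrame[i] == 0 || baseline[i] == 0) continue; // skip invalid pixels
                int diff = baseline[i] - depthFrame[i]; // press pushes fabric toward the Kinect
                if (diff > bestDiff && diff < MaxPlausibleMm)
                {
                    bestIndex = i;
                    bestDiff = diff;
                }
            }
            if (bestIndex < 0) return null;
            return (bestIndex % width, bestIndex / width);
        }
    }

Rejecting zero-valued pixels and implausibly large differences, and requiring a minimum difference threshold, is one way to suppress the "phantom touches" mentioned above; the right threshold values depend on the fabric and the Kinect's distance from it.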

1 Answer

0
votes

Your problem seems to stem from the fact that the Kinect depth and color streams are not aligned: for each depth pixel there is a certain offset to account for. Therefore you must:

Before acquiring the first depth frame, get the coordinate mapper:

KinectSensor sensor = KinectSensor.GetDefault();
CoordinateMapper mapper = sensor.CoordinateMapper;

Once those are ready, acquire your first depth frame and use it with the mapper to populate an array of ColorSpacePoint, like so:

// depthFrame is the acquired DepthFrame; the ColorSpacePoint array
// must be allocated with one entry per depth pixel before mapping.
ushort[] depthData = new ushort[depthFrame.FrameDescription.LengthInPixels];
depthFrame.CopyFrameDataToArray(depthData);
ColorSpacePoint[] colorPoints = new ColorSpacePoint[depthData.Length];
mapper.MapDepthFrameToColorSpace(depthData, colorPoints);

This newly populated array contains your depth data mapped into the color space of the Kinect camera. You will then get accurate, reliable information about where each depth pixel falls inside the color frame, and you can use the color image to calibrate your application.

From there, I suppose you could use the ColorSpacePoint values to determine where in the image the touch is applied, and debug it visually using the regular color stream.
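Once you know where a touch landed in depth-pixel coordinates, you still need to express it in Unity's space. One simple approach (a sketch, not from the Kinect SDK) is a corner calibration: record the depth-pixel positions of the four corners of the projected game area, for example by touching each corner once at startup, then interpolate. The class below assumes the projected region is roughly rectangular and axis-aligned in the depth image; all names and fields are hypothetical:

    using UnityEngine;

    // Hypothetical helper: converts a depth-pixel touch position into a
    // Unity world position on the 2D game plane, using calibrated corners.
    public class DepthToUnityMapper
    {
        // Depth-pixel coordinates of the projected area's corners,
        // filled in during a one-time calibration step.
        public Vector2 topLeft, topRight, bottomLeft;

        public Vector3 ToWorld(int depthX, int depthY, Camera cam)
        {
            // Normalize the touch into [0,1]^2 relative to the calibrated corners.
            float u = Mathf.InverseLerp(topLeft.x, topRight.x, depthX);
            float v = Mathf.InverseLerp(topLeft.y, bottomLeft.y, depthY);

            // Unity viewport space has (0,0) at the bottom-left, so flip v.
            // The z component is the distance from the camera to the 2D plane.
            Vector3 viewport = new Vector3(u, 1f - v, -cam.transform.position.z);
            return cam.ViewportToWorldPoint(viewport);
        }
    }

With the world position in hand, something like Physics2D.OverlapPoint could then tell you which falling object the player tapped. If the projected area is noticeably skewed in the depth image, a full four-corner homography would be more accurate than this bilinear approximation.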