
I'm using Kinect v2 and Kinect SDK v2.

I have couple of questions about coordinate mapping:

  1. How do I transform a camera space point (a point in the 3D coordinate system) to depth space together with its depth value?

    The current MapCameraPointToDepthSpace method only returns the depth space coordinate.

    But without the depth value, that coordinate alone is of limited use.

    Does anyone know how to get the depth value?

  2. How do I get the color camera intrinsics?

    There is only a GetDepthCameraIntrinsics method, which returns the depth camera intrinsics.

    But what about the color camera?

  3. How do I use the depth camera intrinsics?

    It seems that the Kinect v2 accounts for radial distortion.

    But how do I use these intrinsics to transform between a depth pixel and a 3D point?

    Is there any example code that does this?


2 Answers


Regarding 1: The depth value of your remapped point is simply the original camera space point's Z value. Read the descriptions of the depth buffer and the camera (world) coordinate space: in both cases the value is the distance from the point to the sensor's plane, though camera space expresses it in meters while the depth frame stores it in millimeters. If instead you want the depth value of the object actually seen on the depth frame behind your remapped coordinate, you have to read the depth image buffer at that position.
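
A minimal C++ sketch of both options, assuming an already-initialized ICoordinateMapper (named `mapper` here) and a raw UINT16 depth buffer at the Kinect v2's standard 512x424 resolution:

```cpp
#include <Kinect.h>

// Assumes mapper and depthBuffer have already been obtained from the sensor.
void DepthForCameraPoint(ICoordinateMapper* mapper, const UINT16* depthBuffer)
{
    CameraSpacePoint csp = { 0.1f, 0.2f, 1.5f };  // camera space, meters

    DepthSpacePoint dsp;
    mapper->MapCameraPointToDepthSpace(csp, &dsp);

    // Option A: the depth of the point itself is just its Z value.
    // Camera space is in meters; the depth frame stores millimeters.
    UINT16 depthOfPoint = static_cast<UINT16>(csp.Z * 1000.0f);

    // Option B: the depth of whatever the sensor actually sees at that
    // pixel comes from the depth buffer (512x424 for the Kinect v2).
    int x = static_cast<int>(dsp.X + 0.5f);
    int y = static_cast<int>(dsp.Y + 0.5f);
    if (x >= 0 && x < 512 && y >= 0 && y < 424)
    {
        UINT16 depthBehindPoint = depthBuffer[y * 512 + x];
    }
}
```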

Regarding 3: You use the camera intrinsics when you have to construct the mapping manually (i.e. when you don't have a Kinect available). When you get the CoordinateMapper associated with a Kinect (through the Kinect object's CoordinateMapper property), it already contains that Kinect's intrinsics; that's why there is a GetDepthCameraIntrinsics method which returns that specific Kinect's intrinsics (they vary from device to device).
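
As for how to apply the intrinsics yourself: below is a sketch of the standard pinhole model with Brown radial distortion, using the fields of the SDK's CameraIntrinsics struct. How the SDK applies its distortion terms internally isn't documented, so treat the exact convention (including the Y-axis flip between camera space and image space) as an assumption and verify against MapCameraPointToDepthSpace's output:

```cpp
#include <Kinect.h>

// Project a camera space point (meters) to depth image pixel coordinates
// using a pinhole model with three radial distortion terms. The math is
// the standard Brown model; matching the SDK's internal convention exactly
// is an assumption to be verified.
DepthSpacePoint ProjectToDepthSpace(const CameraIntrinsics& in,
                                    const CameraSpacePoint& p)
{
    // Normalized image coordinates.
    float xn = p.X / p.Z;
    float yn = p.Y / p.Z;

    // Radial distortion factor: 1 + k2*r^2 + k4*r^4 + k6*r^6.
    float r2 = xn * xn + yn * yn;
    float d = 1.0f
            + in.RadialDistortionSecondOrder * r2
            + in.RadialDistortionFourthOrder * r2 * r2
            + in.RadialDistortionSixthOrder  * r2 * r2 * r2;

    DepthSpacePoint dsp;
    dsp.X = in.FocalLengthX * xn * d + in.PrincipalPointX;
    // Camera space Y points up, image Y points down, hence the sign flip
    // (again, an assumption: check against the mapper's own output).
    dsp.Y = in.PrincipalPointY - in.FocalLengthY * yn * d;
    return dsp;
}
```

Going the other way (depth pixel plus depth value to a 3D point) is the inverse of this: undo the distortion on the normalized coordinates (iteratively, since the Brown model has no closed-form inverse), then multiply by the depth in meters.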


Regarding 2: There is no way to get the color camera intrinsics from the SDK. You have to estimate them yourself via camera calibration.
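
For example, one common approach (an illustration, not part of the Kinect SDK) is to calibrate the color camera with OpenCV from chessboard images captured on the color stream; the board dimensions and square size below are assumptions to adjust to whatever pattern you print:

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Estimate color camera intrinsics from grayscale chessboard images.
// Assumes a board with 9x6 inner corners and 25 mm squares.
void CalibrateColorCamera(const std::vector<cv::Mat>& grayImages)
{
    const cv::Size patternSize(9, 6);
    const float squareSize = 0.025f;  // meters

    // The board's corner positions in its own coordinate system (Z = 0).
    std::vector<cv::Point3f> boardCorners;
    for (int y = 0; y < patternSize.height; ++y)
        for (int x = 0; x < patternSize.width; ++x)
            boardCorners.emplace_back(x * squareSize, y * squareSize, 0.0f);

    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;
    for (const cv::Mat& img : grayImages)
    {
        std::vector<cv::Point2f> corners;
        if (cv::findChessboardCorners(img, patternSize, corners))
        {
            cv::cornerSubPix(img, corners, cv::Size(11, 11), cv::Size(-1, -1),
                cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT,
                                 30, 0.001));
            objectPoints.push_back(boardCorners);
            imagePoints.push_back(corners);
        }
    }

    // cameraMatrix holds fx, fy, cx, cy; distCoeffs holds k1, k2, p1, p2, k3.
    cv::Mat cameraMatrix, distCoeffs;
    std::vector<cv::Mat> rvecs, tvecs;
    cv::calibrateCamera(objectPoints, imagePoints, grayImages[0].size(),
                        cameraMatrix, distCoeffs, rvecs, tvecs);
}
```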