1 vote

I am working on a Cinder and Kinect v2 app, and currently I am stuck on mapping a color point to a depth point. I searched through a color frame for a specific point of a certain color, so I have the color frame point's x and y.

I would like to get the depth at that point, but of course the depth frame has a different resolution and viewpoint, so you can't just index into it with the same coordinates.

I couldn't find any mapper from a color point to a depth point, or even to a camera point. Is there a simple way of doing this other than taking the measurements yourself?

My problem is similar to this one: How to get real world coordinates (x, y, z) from a distinct object using a Kinect, but I don't need the actual real-world coordinates. However, the answer there doesn't fully explain how to do what I need.


2 Answers

2 votes

You have to use the CoordinateMapper class. For your case, use the function:

    public void MapColorFrameToDepthSpace(
        UInt16[] depthFrameData,
        DepthSpacePoint[] depthSpacePoints
    )
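
For reference, a rough C# sketch of how that call is typically used (the question is about Cinder/C++, where the native equivalent is ICoordinateMapper::MapColorFrameToDepthSpace, but the flow is the same). The names sensor, depthFrame, colorX and colorY are placeholders, not from the question:

    using System;
    using Microsoft.Kinect;

    // Given a pixel (colorX, colorY) found in the 1920x1080 color frame, look up
    // the matching depth pixel and return its depth in millimetres, or null if
    // the mapping failed (e.g. for occluded pixels).
    static ushort? DepthAtColorPixel(KinectSensor sensor, DepthFrame depthFrame,
                                     int colorX, int colorY)
    {
        const int DepthWidth = 512, DepthHeight = 424;
        const int ColorWidth = 1920, ColorHeight = 1080;

        // MapColorFrameToDepthSpace needs the whole depth frame as raw ushorts.
        ushort[] depthData = new ushort[DepthWidth * DepthHeight];
        depthFrame.CopyFrameDataToArray(depthData);

        // The mapper produces one DepthSpacePoint per color pixel.
        DepthSpacePoint[] depthPoints = new DepthSpacePoint[ColorWidth * ColorHeight];
        sensor.CoordinateMapper.MapColorFrameToDepthSpace(depthData, depthPoints);

        DepthSpacePoint p = depthPoints[colorY * ColorWidth + colorX];
        if (float.IsNegativeInfinity(p.X) || float.IsNegativeInfinity(p.Y))
            return null;                                // no depth sample here

        int depthX = (int)Math.Round(p.X);
        int depthY = (int)Math.Round(p.Y);
        return depthData[depthY * DepthWidth + depthX]; // depth in millimetres
    }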
1 vote

You must use the CoordinateMapper of the Kinect sensor. The mapping you want is MapColorFrameToDepthSpace, which is available in the Kinect v2 SDK.

After this step you will have three values (Xview, Yview, Zworld). You can find Yworld and Xworld with these formulas:

Yworld = Zworld * Yview / focal_length
Xworld = Zworld * Xview / focal_length

Now you know the world coordinates of any image point.
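
A minimal sketch of those formulas, assuming Xview and Yview are pixel offsets measured from the principal point (roughly the image centre) and that focal length and principal point are read from the SDK's GetDepthCameraIntrinsics(); variable names are mine:

    using Microsoft.Kinect;

    // Back-project a depth pixel (depthX, depthY) with depth zWorld (in mm) into
    // world coordinates using the pinhole-camera formulas above.
    static void DepthPixelToWorld(KinectSensor sensor, int depthX, int depthY,
                                  ushort zWorld, out float xWorld, out float yWorld)
    {
        // Focal length and principal point of the depth camera, in pixels.
        CameraIntrinsics ci = sensor.CoordinateMapper.GetDepthCameraIntrinsics();

        // Xview / Yview: pixel offsets from the principal point.
        float xView = depthX - ci.PrincipalPointX;
        float yView = depthY - ci.PrincipalPointY;

        // Xworld = Zworld * Xview / focal_length (and likewise for Yworld).
        xWorld = zWorld * xView / ci.FocalLengthX;
        yWorld = zWorld * yView / ci.FocalLengthY;
    }

Note that CoordinateMapper.MapDepthPointToCameraSpace can also give you a CameraSpacePoint directly (in metres), so the manual formulas are mainly useful if you want to stay in millimetres or see what the mapper is doing.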

You can easily measure the distance between two points:

Distance_of_p1_p2(mm) = Sqrt ( Square(p2.Xworld - p1.Xworld) +  Square(p2.Yworld - p1.Yworld) + Square(p2.Zworld - p1.Zworld))
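
In code this is just the Euclidean distance; a small sketch under the same naming:

    using System;

    // Straight-line distance between two world-space points, in millimetres.
    static double DistanceMm(float x1, float y1, float z1,
                             float x2, float y2, float z2)
    {
        double dx = x2 - x1, dy = y2 - y1, dz = z2 - z1;
        return Math.Sqrt(dx * dx + dy * dy + dz * dz);
    }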