I'm trying to map my OpenNI (1.5.4.0) Kinect for Windows depth map to an OpenCV RGB image.
I have the 640x480 depth map with depth in mm and was trying to do the mapping like Burrus: http://burrus.name/index.php/Research/KinectCalibration
I skipped the distortion part, but otherwise I think I did everything:
//With the depth camera intrinsics, each pixel (x,y) of the depth camera can be projected
//to metric 3D space, with fx_ir, fy_ir, cx_ir and cy_ir the intrinsics of the depth (IR) camera.
P3D.at<Vec3f>(y,x)[0] = (x - cx_ir) * depth/fx_ir;
P3D.at<Vec3f>(y,x)[1] = (y - cy_ir) * depth/fy_ir;
P3D.at<Vec3f>(y,x)[2] = depth;
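For context, this is roughly how that fragment sits in the per-pixel loop (a minimal sketch; depthMap stands for my 640x480 CV_16UC1 depth Mat in mm, and fx_ir, fy_ir, cx_ir, cy_ir are my estimated IR intrinsics):

#include <opencv2/core/core.hpp>
using namespace cv;

// Back-project every depth pixel to a metric 3D point in the IR camera frame.
Mat P3D(depthMap.rows, depthMap.cols, CV_32FC3);
for (int y = 0; y < depthMap.rows; ++y)
{
    for (int x = 0; x < depthMap.cols; ++x)
    {
        float depth = static_cast<float>(depthMap.at<unsigned short>(y, x)); // depth in mm
        P3D.at<Vec3f>(y, x)[0] = (x - cx_ir) * depth / fx_ir;
        P3D.at<Vec3f>(y, x)[1] = (y - cy_ir) * depth / fy_ir;
        P3D.at<Vec3f>(y, x)[2] = depth;
    }
}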
//P3D' = R.P3D + T:
RTMat = (Mat_<float>(4,4) << 0.999388, -0.00796202, -0.0480646, -3.96963,
0.00612322, 0.9993536, 0.0337474, -22.8512,
0.0244427, -0.03635059, 0.999173, -15.6307,
0,0,0,1);
perspectiveTransform(P3D, P3DS, RTMat);
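Since the last row of RTMat is 0 0 0 1, perspectiveTransform should reduce to the plain rigid transform; for a single point the explicit equivalent would be something like this (a sketch with the same R and T values):

// Explicit rigid transform of one point, equivalent to the perspectiveTransform call above.
Matx33f R(0.999388f,  -0.00796202f, -0.0480646f,
          0.00612322f, 0.9993536f,   0.0337474f,
          0.0244427f, -0.03635059f,  0.999173f);
Vec3f T(-3.96963f, -22.8512f, -15.6307f);
Vec3f p = P3D.at<Vec3f>(y, x);
Vec3f pTransformed = R * p + T; // should match P3DS.at<Vec3f>(y, x)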
//Reproject each 3D point onto the color image and get its color:
depth = P3DS.at<Vec3f>(y,x)[2];
x_rgb = (P3DS.at<Vec3f>(y,x)[0] * fx_rgb) / depth + cx_rgb;
y_rgb = (P3DS.at<Vec3f>(y,x)[1] * fy_rgb) / depth + cy_rgb;
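The colour lookup afterwards is roughly this (a sketch; rgbImage stands for the 640x480 CV_8UC3 frame from the RGB camera, mapped for the registered output image):

// Round to the nearest RGB pixel and copy its colour, with bounds checking.
int u = cvRound(x_rgb);
int v = cvRound(y_rgb);
if (u >= 0 && u < rgbImage.cols && v >= 0 && v < rgbImage.rows)
{
    mapped.at<Vec3b>(y, x) = rgbImage.at<Vec3b>(v, u);
}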
But with my estimated calibration values for the RGB camera and the IR camera of the Kinect, my result is off in every direction and cannot be fixed just by changing the extrinsic T parameters.
I have a few suspicions:
- Does OpenNI already map the IR depth map to the RGB camera of the Kinect? (See the sketch after this list for how I would check that.)
- Should I use depth in meters and/or transform the pixels into mm? (I tried multiplying by pixel_size * 0.001, but I got the same results.)
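Regarding the first suspicion, this is how I would check the registration state in OpenNI 1.x, if I read the docs right (a sketch, assuming an already initialized xn::DepthGenerator depth and xn::ImageGenerator image):

// Check whether the depth map is already registered to the RGB camera's viewpoint.
if (depth.IsCapabilitySupported(XN_CAPABILITY_ALTERNATIVE_VIEW_POINT))
{
    if (depth.GetAlternativeViewPointCap().IsViewPointAs(image))
    {
        // Depth is already expressed in the RGB viewpoint, so my R|T step
        // would be applied on top of an already registered map.
    }
    // Registration can be switched on or off explicitly:
    // depth.GetAlternativeViewPointCap().SetViewPoint(image);
    // depth.GetAlternativeViewPointCap().ResetViewPoint();
}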
I really hope someone can help me. Thanks in advance.