For my application, I analyzed the spatial resolution of the Kinect v2.
To analyze the spatial resolution, I recorded a planar surface perpendicular to the camera at a given distance and converted its depth map to a point cloud. Then I compared each point to its neighbors by calculating the Euclidean distance between them.
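Roughly, the neighbor comparison looks something like this (a minimal sketch, assuming the converted point cloud is stored as an N-by-3 matrix P and that knnsearch from the Statistics and Machine Learning Toolbox is available; this is just one way to do it):

% minimal sketch of the neighbor comparison
% P is assumed to be an N-by-3 matrix [X Y Z] of the recorded plane's points
[~, nn_dist] = knnsearch(P, P, 'K', 2);   % K = 2 because the first neighbor is the point itself
spacing = nn_dist(:, 2);                  % Euclidean distance to the nearest neighbor
mean_resolution = mean(spacing);          % mean point spacing, in the same unit as the depth values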
Calculating the Euclidean distance for this case (1 meter between the plane and the Kinect), the resolution is close to 3 mm between points. For a plane at 2 meters distance, I also got a resolution of up to 3 mm.
Comparing this to the literature, I think my results are quite bad.
For example, Yang et al. got a mean resolution of 4 mm for a plane at a distance of 4 meters to the Kinect ("Evaluating and Improving the Depth Accuracy of Kinect for Windows v2").
Here is an example of my point cloud of the planar surface (2 meters distance to my Kinect):
Has anyone made any observations regarding the spatial resolution of the Kinect v2, or does anyone have an idea why my resolution is that bad?
I suspect something went wrong when converting my depth image to world coordinates. Therefore, here is a code snippet:
% normalize image points by multiplying with the inverse of K
% u, v are the pixel coordinates of my depth image
u_n = (u(:) - c_x) / f_x;
v_n = (v(:) - c_y) / f_y;

% calculate the radial distortion factor
r = sqrt(u_n.^2 + v_n.^2);
radial_distortion = 1.0 + radial2nd * r.^2 + radial4nd * r.^4 + radial6nd * r.^6;

% apply the radial distortion to the normalized coordinates
u_dis = u_n(:) .* radial_distortion;
v_dis = v_n(:) .* radial_distortion;

% apply the camera matrix to get the undistorted depth point
x_depth = u_dis * f_x + c_x;
y_depth = v_dis * f_y + c_y;

% convert 2D to 3D (pinhole back-projection)
X = ((x_depth(:) - c_x) .* d(:)) ./ f_x;
Y = ((y_depth(:) - c_y) .* d(:)) ./ f_y;
Z = d; % d is the measured depth value at (u, v)
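(Just to make the math explicit: since the camera matrix applied in the previous step is inverted again in the 2D-to-3D step, the last three lines should be algebraically equivalent to the following.)

% equivalent back-projection written directly with the normalized, distorted coordinates
X = u_dis(:) .* d(:);
Y = v_dis(:) .* d(:);
Z = d;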
EDIT: So far I have also tried to take the points directly from the coordinate mapper, without any further calibration steps.
The results regarding the resolution are still the same. Does anyone have results to compare against?