I have done the same for my application. Here is a brief overview of what I did:
Save the data (C#/Kinect SDK):
How to save a depth Image:
MultiSourceFrame mSF = (MultiSourceFrame)reference;
// AcquireFrame may return null; the using-block handles disposal either way
using (DepthFrame frame = mSF.DepthFrameReference.AcquireFrame())
{
    if (frame != null)
    {
        // Copy the raw depth data into a preallocated byte[] targetBuffer
        using (KinectBuffer depthBuffer = frame.LockImageBuffer())
        {
            Marshal.Copy(depthBuffer.UnderlyingBuffer, targetBuffer, 0, DEPTH_IMAGESIZE);
        }
    }
}
write buffer to file:
File.WriteAllBytes(filePath + fileName, targetBuffer);
For fast saving, consider a ring buffer, so that disk I/O never blocks the capture loop.
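The ring-buffer idea can be sketched like this (in Python here, since the C# above is only a fragment; the class and names are illustrative, not part of the Kinect SDK). A fixed-capacity buffer lets the capture thread keep pushing frames while a writer thread drains them, dropping the oldest frames when the writer falls behind:

```python
from collections import deque

class FrameRingBuffer:
    """Fixed-capacity frame buffer: the capture loop appends frames,
    a writer thread drains them to disk, so slow I/O never blocks capture."""
    def __init__(self, capacity):
        # deque with maxlen silently drops the oldest frame when full
        self._frames = deque(maxlen=capacity)

    def push(self, frame_bytes):
        self._frames.append(frame_bytes)

    def pop_all(self):
        # Drain everything currently buffered for the writer to save
        drained = list(self._frames)
        self._frames.clear()
        return drained

buf = FrameRingBuffer(capacity=3)
for i in range(5):
    buf.push(bytes([i]))
frames = buf.pop_all()  # only the newest 3 frames survive
```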
Read in the data (Matlab)
How to get z-data:
fid = fopen(fileNameImage, 'r');
img = fread(fid, [IMAGE_WIDTH*IMAGE_HEIGHT, 1], 'uint16');
fclose(fid);
% Matlab fills arrays column-major; transpose to get a HEIGHT-by-WIDTH image
img = reshape(img, IMAGE_WIDTH, IMAGE_HEIGHT)';
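For comparison, here is a minimal sketch of the same read in Python/NumPy, assuming the raw file holds little-endian uint16 depth values (in mm) at the Kinect v2 depth resolution of 512x424:

```python
import numpy as np

IMAGE_WIDTH, IMAGE_HEIGHT = 512, 424  # Kinect v2 depth resolution

def read_depth_image(path):
    # The file is a flat buffer of little-endian uint16 depth values
    flat = np.fromfile(path, dtype='<u2', count=IMAGE_WIDTH * IMAGE_HEIGHT)
    # The sensor writes row by row, so reshape to (height, width)
    return flat.reshape(IMAGE_HEIGHT, IMAGE_WIDTH)
```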
How to get XYZ data:
Use the pinhole camera model to convert uv (pixel) coordinates plus depth z to xyz:
x = (u - cx) * z / fx
y = (v - cy) * z / fy
To get the camera matrix K (i.e. fx, fy, cx, cy), either calibrate your camera (Matlab Camera Calibrator app) or read the depth camera parameters from the Kinect SDK:
var cI = kinectSensor.CoordinateMapper.GetDepthCameraIntrinsics();
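The back-projection above can be sketched in Python/NumPy; the intrinsics fx, fy, cx, cy stand in for whatever your calibration (or GetDepthCameraIntrinsics) returns:

```python
import numpy as np

def depth_to_xyz(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth image to an (H, W, 3) XYZ cloud
    via the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    h, w = depth.shape
    # Pixel coordinate grids: u[i, j] = j (column), v[i, j] = i (row)
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.dstack((x, y, z))
```

Units follow the depth image (mm for the raw Kinect buffer); divide by 1000 if you want meters.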
With the SDK:
Getting XYZ directly from the Kinect SDK is much easier: acquire the depth buffer and convert the raw data to xyz with the CoordinateMapper (e.g. MapDepthFrameToCameraSpace). Afterwards save the points to CSV or txt, so they are easy to read into Matlab.
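Once the xyz points are saved as CSV (one x,y,z triple per line is my assumed layout), loading them back is trivial; points the CoordinateMapper could not map come back as -Infinity, so it is worth filtering those out. A Python/NumPy sketch:

```python
import numpy as np

def load_point_cloud(csv_path):
    # Each line: x,y,z (CameraSpacePoint units from the SDK are meters)
    pts = np.atleast_2d(np.loadtxt(csv_path, delimiter=','))
    # Drop rows containing -inf/inf/nan (unmappable depth pixels)
    return pts[np.isfinite(pts).all(axis=1)]
```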