
I have a saved set of data captured with a Kinect V2 using the Kinect SDK. Data are in the form of RGB image, depth image, and colored point cloud. I used C# for this.

Now I want to create the point cloud separately using only the saved color and depth images, but in Matlab.

The pcfromkinect Matlab function requires a live Kinect. But I want to generate the point cloud without a connected Kinect.

Any ideas, please?

I found several related questions, but none of them offers a clear solution.


1 Answer


I have done the same for my application, so here is a brief overview of what I did:

Save the data (C#/Kinect SDK):

How to save a depth image:

    MultiSourceFrame mSF = (MultiSourceFrame)reference;
    var frame = mSF.DepthFrameReference.AcquireFrame();

    if (frame != null)
    {
        using (KinectBuffer depthBuffer = frame.LockImageBuffer())
        {
            // copy the raw depth pixels (one ushort per pixel) into targetBuffer
            Marshal.Copy(depthBuffer.UnderlyingBuffer, targetBuffer, 0, DEPTH_IMAGESIZE);
        }

        frame.Dispose();
    }

write buffer to file:

    File.WriteAllBytes(filePath + fileName, targetBuffer);

For fast saving, consider using a ring buffer.
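The ring-buffer idea can be sketched like this (a hypothetical Python illustration, not part of the Kinect SDK): a capture thread pushes raw frame buffers, a writer thread drains them to disk, and the oldest frame is dropped when the buffer is full, so capture never blocks on I/O.

```python
from collections import deque

class FrameRingBuffer:
    """Minimal ring buffer for raw frame byte buffers (illustrative only)."""

    def __init__(self, capacity):
        # deque with maxlen drops the oldest entry automatically when full
        self.frames = deque(maxlen=capacity)

    def push(self, frame_bytes):
        self.frames.append(frame_bytes)

    def pop_oldest(self):
        # the writer thread would call this and File.WriteAllBytes the result
        return self.frames.popleft() if self.frames else None

buf = FrameRingBuffer(capacity=3)
for i in range(5):
    buf.push(bytes([i]))  # pretend these are depth frames
# capacity is 3, so the two oldest frames were dropped
print(list(buf.frames))
```

The point of the fixed capacity is that a slow disk only costs you old frames instead of stalling the acquisition loop.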

Read in the data (Matlab)

How to get the z_data:

    fid = fopen(fileNameImage, 'r');
    img = fread(fid, [IMAGE_WIDTH*IMAGE_HEIGHT, 1], 'uint16');
    fclose(fid);
    img = reshape(img, IMAGE_WIDTH, IMAGE_HEIGHT);
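If you want to sanity-check the raw file outside Matlab, the same read can be sketched in Python/NumPy. The sizes below assume the Kinect V2 depth resolution of 512x424; the dummy file stands in for what the C# snippet above writes (one uint16 per pixel).

```python
import numpy as np

# Assumed Kinect V2 depth frame size (512 x 424)
IMAGE_WIDTH, IMAGE_HEIGHT = 512, 424

# Write a dummy raw file the way the C# snippet does, then read it back,
# mirroring the Matlab fread/reshape above.
depth = (np.arange(IMAGE_WIDTH * IMAGE_HEIGHT) % 65536).astype(np.uint16)
depth.tofile('depth.raw')

img = np.fromfile('depth.raw', dtype=np.uint16)
img = img.reshape(IMAGE_HEIGHT, IMAGE_WIDTH)  # NumPy is row-major: height first
```

Note the row-major/column-major difference: NumPy reshapes height-first, while Matlab's reshape fills column by column, so compare a transposed view if you cross-check the two.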

How to get the XYZ data:

For that, use the pinhole camera model to convert uv (pixel) coordinates plus depth to xyz: x = (u - cx) * z / fx and y = (v - cy) * z / fy, where (cx, cy) is the principal point and fx, fy are the focal lengths in pixels.

To get the camera matrix K you need to calibrate your camera (Matlab's camera calibration app) or get the camera parameters from the Kinect SDK (var cI = kinectSensor.CoordinateMapper.GetDepthCameraIntrinsics();).
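Put together, the back-projection looks like the following Python/NumPy sketch. The intrinsics here are placeholders; substitute the values from your own calibration or from GetDepthCameraIntrinsics(), and note the depth image is a dummy 1 m plane.

```python
import numpy as np

# Placeholder intrinsics (assumed values, replace with your calibration)
fx, fy = 365.0, 365.0   # focal lengths in pixels
cx, cy = 256.0, 212.0   # principal point
W, H = 512, 424         # Kinect V2 depth resolution

# z in meters per pixel; here a dummy flat plane 1 m from the camera
z = np.ones((H, W))

# u, v are pixel coordinates; invert the pinhole model:
#   x = (u - cx) * z / fx,  y = (v - cy) * z / fy
u, v = np.meshgrid(np.arange(W), np.arange(H))
x = (u - cx) * z / fx
y = (v - cy) * z / fy

# N x 3 point cloud, one row per pixel
points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

A pixel at the principal point maps to x = y = 0, which is a quick sanity check that the intrinsics are wired up correctly.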

CoordinateMapper with the SDK:

Getting XYZ directly from the Kinect SDK is much easier. For that this link could help you. Just get the buffer via the Kinect SDK and convert the raw data to xyz with the CoordinateMapper. Afterwards, save it to CSV or TXT so it is easy to read into Matlab.
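The CSV hand-off itself is trivial; here is a Python/NumPy sketch of the round trip (the two example points are made up). An N x 3 text file like this can be read on the Matlab side with readmatrix or csvread.

```python
import numpy as np

# Save an N x 3 xyz array (e.g. what the CoordinateMapper gives you)
# as plain text -- no binary parsing needed on the Matlab side.
xyz = np.array([[0.0, 0.0, 1.0],
                [0.1, -0.2, 1.5]])
np.savetxt('cloud.csv', xyz, delimiter=',', fmt='%.6f')

# Reading it back works the same way in any environment
back = np.loadtxt('cloud.csv', delimiter=',')
```

Text is slower and bigger than raw binary, but for offline point-cloud export the simplicity usually wins.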