
My educational project is about hand gesture recognition using the Kinect. I'm using the Kinect for Xbox One with its adapter.

I recorded color and depth videos from the Kinect using the following code (similar to what is described here):

clc
clear all
close all

imaqreset

%Call up directory containing utility functions
utilpath = fullfile(matlabroot, 'toolbox', 'imaq', 'imaqdemos', 'html', 'KinectForWindows');
addpath(utilpath);

%Create the videoinput objects for the colour and depth streams
colourVid = videoinput('kinect', 1, 'BGR_1920x1080');
depthVid = videoinput('kinect', 2, 'Depth_512x424');


depthSource = getselectedsource(depthVid);
depthSource.EnableBodyTracking = 'on';

%------------------------------------------------
%Setting up the recording
%------------------------------------------------

% set the data streams to log to both disk and memory
set(colourVid, 'LoggingMode', 'Disk&Memory');
set(depthVid, 'LoggingMode', 'Disk&Memory');

%Set a video timeout property limit to 50 seconds 
set(colourVid, 'Timeout',50);
set(depthVid, 'Timeout',50);

%Create the VideoWriter objects for disk logging
colourLogfile = VideoWriter('colourTrial5.avi', 'Uncompressed AVI');
depthLogfile = VideoWriter('depthTrial5.mj2', 'Archival');

%configure the video input object to use the VideoWriter object
colourVid.DiskLogger = colourLogfile;
depthVid.DiskLogger = depthLogfile;

%set the triggering mode to 'manual'
triggerconfig([colourVid depthVid], 'manual');

%set the FramesPerTrigger property of the VIDEOINPUT objects to 100 to
%acquire 100 frames per trigger.
set([colourVid depthVid], 'FramesPerTrigger', 100);


%------------------------------------------------
%Initiating the acquisition
%------------------------------------------------

%Start the colour and depth device. This begins acquisition, but does not
%start logging of acquired data
start([colourVid depthVid]);

pause(20); %allow time for both streams to start

%Trigger the devices to start logging of data.
trigger([colourVid depthVid]);

%Retrieve the acquired data
[colourFrameData, colourTimeData, colourMetaData] = getdata(colourVid);
[depthFrameData, depthTimeData, depthMetaData] = getdata(depthVid);


skeletonData = depthMetaData; %making copy of data 
save('skeletonData.mat','skeletonData')


stop([colourVid depthVid])

But when I play back the videos, it's quite obvious that the color and depth videos were not recorded at exactly the same time. My current implementation has a lag of almost one second between the two sensors: the color sensor seems to start before the depth sensor, and then, after acquiring 100 frames, it also stops before the depth sensor. I would like both sensors to be synchronized so that the video from each sensor shows the same moment.

Why does that happen? Can anyone suggest a solution?

Any solutions will be appreciated. Thanks.

I don't have experience with this hardware, but does the recorded video metadata not have timestamps or fps data? That will help sync them - Ander Biguri
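
For what it's worth, the frame metadata returned by getdata in the Image Acquisition Toolbox carries an absolute timestamp per frame (the AbsTime field), so the offset between the two streams can be measured directly. A minimal sketch, reusing the variable names from the question's code:

colourAbs = datenum(cat(1, colourMetaData.AbsTime));   % N-by-6 clock vectors -> serial dates
depthAbs  = datenum(cat(1, depthMetaData.AbsTime));
startOffsetSec = (colourAbs(1) - depthAbs(1)) * 86400;  % datenum is in days
fprintf('First-frame offset (colour minus depth): %.3f s\n', startOffsetSec);

% Per-frame offset over the frames both streams have
n = min(numel(colourAbs), numel(depthAbs));
frameOffsetSec = (colourAbs(1:n) - depthAbs(1:n)) * 86400;
fprintf('Mean per-frame offset: %.3f s\n', mean(frameOffsetSec));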

1 Answer


Looking at your code, I think it is capturing correctly, but it is capturing multiple frames per trigger and playing them back. You could check this by saving timestamped images instead of writing a video directly (to see whether there are repeated frames). So you should set FramesPerTrigger as follows:

set([colourVid depthVid], 'FramesPerTrigger', 1);

Alternatively, you could use a small utility I made a while back, although it saves timestamped images rather than a video. You can modify it to write a video as in the code you attached. Hope this helps.
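
For example, a minimal sketch of that approach (not the actual utility; it assumes colourVid and depthVid are configured as in the question, with manual triggering and FramesPerTrigger set to 1) grabs one frame pair per trigger and writes it out with the acquisition timestamp in the file name, which makes repeated or misaligned frames easy to spot:

% Allow the manual trigger to be fired repeatedly
set([colourVid depthVid], 'TriggerRepeat', Inf);

nPairs = 50;                                   % illustrative number of frame pairs
start([colourVid depthVid]);
for k = 1:nPairs
    trigger([colourVid depthVid]);             % fire both streams together
    [cFrame, cTime] = getdata(colourVid, 1);
    [dFrame, dTime] = getdata(depthVid, 1);

    % The BGR_1920x1080 format delivers BGR channel order; flip to RGB before writing
    imwrite(flip(cFrame, 3), sprintf('colour_%03d_%08.3f.png', k, cTime));
    % Depth frames are uint16; PNG keeps the raw 16-bit values
    imwrite(dFrame, sprintf('depth_%03d_%08.3f.png', k, dTime));
end
stop([colourVid depthVid]);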