I'm experimenting with the Kinect for skeleton mapping and can see the SDK supports up to 4 sensors connected simultaneously.
Unfortunately I only have one sensor at my disposal at the moment, so I'm unsure how the SDK behaves when more than one sensor is connected.
Specifically, is the data merged in the exposed API? Say you are handling the
private void Kinect_AllFramesReady(object sender, AllFramesReadyEventArgs e)
{
}
event: does SkeletonFrame.SkeletonArrayLength
increase to 12, 18, or 24 as additional sensors are connected?
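For context, here is how I'm assuming multi-sensor initialization works, based on the KinectSensor.KinectSensors collection. I can't verify this with only one device, so treat it as a sketch of my assumptions rather than working multi-sensor code:

using Microsoft.Kinect;

// Assumption: each connected sensor is enabled and wired up independently,
// so each one raises its own AllFramesReady events rather than merging
// skeletons from all devices into a single larger array.
foreach (KinectSensor sensor in KinectSensor.KinectSensors)
{
    if (sensor.Status != KinectStatus.Connected)
        continue;

    sensor.ColorStream.Enable();
    sensor.DepthStream.Enable();
    sensor.SkeletonStream.Enable();

    sensor.AllFramesReady += Kinect_AllFramesReady;
    sensor.Start();
}

If that assumption holds, I'd expect each handler invocation to still report SkeletonFrame.SkeletonArrayLength == 6, just fired once per device, but I'd like confirmation.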
How do I access the ColorImageFrame
or DepthImageFrame
for each individual sensor? Normally you might do something like this:
using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
{
//Write pixels
}
to access the camera, but I don't see any obvious way to access data specific to a single device.
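The only mechanism I've spotted for telling frames apart is the sender argument of the event, which I assume is the KinectSensor that raised it. Fleshing out the handler from above under that assumption (again, unverified with real hardware):

private void Kinect_AllFramesReady(object sender, AllFramesReadyEventArgs e)
{
    // Assumption: sender is the KinectSensor that raised the event, so its
    // UniqueKinectId (or DeviceConnectionId) identifies which device the
    // frames in e belong to.
    KinectSensor sensor = (KinectSensor)sender;
    string deviceId = sensor.UniqueKinectId;

    using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
    {
        if (colorFrame == null)
            return;

        // Copy the pixel data out, then route it to whatever buffer or
        // display is associated with deviceId.
        byte[] pixels = new byte[colorFrame.PixelDataLength];
        colorFrame.CopyPixelDataTo(pixels);
    }
}

Is that the intended pattern, or does the SDK expose something more direct for addressing a specific sensor's streams?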
An explanation of the above, plus guidance on what other differences (if any) are important to understand when building applications that use multiple Kinect sensors concurrently, would be much appreciated.