I am currently looking for a stereoscopic camera for a project, and the Kinect v2 seems to be a good option. However, since it's quite an investment for me, I need to be sure it meets my requirements, the main one being good synchronization of the different sensors.
Apparently there is no hardware synchronization of the sensors, and I've found conflicting accounts of the software side:

- Some posts where people complain about lag between the two sensors, and many others asking for a way to synchronize the sensors. Both rely on odd workarounds, and no "official", common solution emerges from the answers.
- Some posts about a MultiSourceFrame class, which is part of the Kinect SDK 2.0. From what I understand, this class lets you retrieve the frames of all the sensors (or a subset; you can choose which sensors to get data from) at a given time. Thus, for a given instant t, you should be able to get the output of the different sensors and be sure these outputs are synchronized.
So my question is: does this MultiSourceFrame class actually do what I think it does? And if so, why is it never proposed as a solution? The posts in the first category seem to be from 2013, i.e. before the release of SDK 2.0. However, the MultiSourceFrame class is supposed to replace the AllFramesReady event of the previous versions of the SDK, and AllFramesReady wasn't suggested as a solution either.
Unfortunately, the documentation doesn't provide much information about how it works, so I'm asking here in case someone has already used it. I'm sorry if my question seems naive, but I'd like to be sure before purchasing such a camera.
Thank you for your answers! And feel free to ask for more details if needed :)