3 votes

We get timestamps as a double value for pose, picture, and point data, and they aren't always aligned. How do I calculate the temporal distance between two timestamps? Yes, I know how to subtract two doubles, but I'm not at all sure how the delta corresponds to time.


2 Answers

2 votes

I have some interesting timestamp data that sheds light on your question, without exactly answering it. I have been trying to match up depth frames with image frames, just like a lot of people posting under this Tango tag. My data did not match exactly, and I thought there were problems with my projection matrices and point reprojection. Then I checked the timestamps on my depth frames and image frames and found that they were off by as much as 130 milliseconds. A lot, even though I was grabbing the most recent image whenever a depth frame became available. So I went back to test just the timestamp data.

I am working in native code based on the point-cloud-jni-example. For each of onXYZijAvailable(), onFrameAvailable(), and onPoseAvailable() I am dumping out time information. In the XYZ and Frame cases I am copying the returned data to a static buffer for later use. For this test I am ignoring the buffered image frame, and the XYZ depth data is displayed in the normal OpenGL display loop of the example code. The data captured looks like this:

                        callback type : systime  : timestamp : last pose
I/tango_jni_example( 3247): TM CLK Img  5.420798  110.914437  110.845522
I/tango_jni_example( 3247): TM CLK XYZ  5.448181  110.792470  110.845522
I/tango_jni_example( 3247): TM CLK Pose  5.454577  110.878850
I/tango_jni_example( 3247): TM CLK Img  5.458924  110.947708  110.878850
I/tango_jni_example( 3247): TM CLK Pose  5.468766  110.912178

The system time comes from std::chrono::system_clock::now(), run inside each callback and offset by a time captured at app start. The timestamp is the actual timestamp field from the XYZij, image, or pose struct - a double, in seconds. For depth and image I also list the most recent pose timestamp (start-of-service to device frame pair, queried with a time of 0.0). A quick analysis of about 2 minutes of sample data leads to the following initial conclusions:
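For reference, this is roughly how those numbers were collected. SecondsSinceStart() and LogStamp() are helper names I made up for this sketch; only std::chrono and the Android logging call are real APIs:

    #include <chrono>
    #include <android/log.h>

    // Seconds elapsed since app start, mirroring the "systime" column above.
    static const auto kAppStart = std::chrono::system_clock::now();

    static double SecondsSinceStart() {
      std::chrono::duration<double> dt =
          std::chrono::system_clock::now() - kAppStart;
      return dt.count();
    }

    // Called from each callback, e.g. from onXYZijAvailable():
    //   LogStamp("XYZ", xyz_ij->timestamp, last_pose_timestamp);
    // The pose callback passes its own timestamp and omits the last column.
    static void LogStamp(const char* tag, double timestamp, double last_pose) {
      __android_log_print(ANDROID_LOG_INFO, "tango_jni_example",
                          "TM CLK %s  %f  %f  %f", tag, SecondsSinceStart(),
                          timestamp, last_pose);
    }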

Pose data is captured at VERY regular intervals of 0.033328 seconds.
Depth data is captured at pretty regular intervals of 0.2 seconds.
Image data is captured at odd intervals:
    with 3 or 4 frames at 0.033 seconds
    then 1 frame at about 0.100 seconds
    often followed by a second frame with the same timestamp
    (even though it is not reported until the next onFrameAvailable()?)

That is the actual timestamp data in the returned structs. The "real?" elapsed time between callbacks is much more variable. The pose callback fires anywhere from 0.010 to 0.079 seconds apart, even though the pose timestamps are rock solid at 0.033. The image (frame) callback fires 4 times at intervals between 0.025 and 0.040 seconds and then gives one long pause of around 0.065 seconds. That is where two images with the same timestamp are returned in successive calls. It appears that the camera is skipping a frame?

So, to match depth, image, and pose you really need to buffer multiple returns with their corresponding timestamps (a ring buffer?) and then match them up by whichever value you want as master. Pose times are the most stable, so I would use those. A sketch of the idea follows.
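Something like this minimal sketch (TimestampRing and Stamped are names I made up; nothing here is Tango API):

    #include <cmath>
    #include <cstddef>
    #include <deque>

    // One buffered return: the struct's timestamp plus a copy of the
    // payload (pose, XYZij, or image data).
    template <typename T>
    struct Stamped {
      double timestamp;  // seconds, straight from the Tango struct
      T data;
    };

    // Keeps the last N returns from one callback.
    template <typename T>
    class TimestampRing {
     public:
      explicit TimestampRing(std::size_t capacity) : capacity_(capacity) {}

      void Push(double timestamp, const T& data) {
        if (ring_.size() == capacity_) ring_.pop_front();
        ring_.push_back({timestamp, data});
      }

      // Entry closest in time to the master timestamp (e.g. a pose time).
      // The pointer is only valid until the next Push().
      const Stamped<T>* Closest(double master_time) const {
        const Stamped<T>* best = nullptr;
        for (const auto& s : ring_) {
          if (!best || std::fabs(s.timestamp - master_time) <
                           std::fabs(best->timestamp - master_time)) {
            best = &s;
          }
        }
        return best;
      }

     private:
      std::size_t capacity_;
      std::deque<Stamped<T>> ring_;
    };

With one ring per data type, you can take each new pose timestamp as master and pull the closest depth and image entries out of the other two rings.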

Note: I have not tried to get a pose for a particular "in between" time to see if the returned pose is interpolated between the values given by onPoseAvailable().

I have the logcat file and various awk extracts available. I am not sure how to post those (thousands of lines).

0 votes

I think the fundamental question is how to sync the pose, depth, and color image data together into a single frame. To answer that, there are actually two steps:

  • Sync pose to either color image or depth: the simplest way is to use the TangoService_getPoseAtTime function, which gives you the ability to query a pose at a certain timestamp. For example, when a depth point cloud becomes available, it carries the timestamp of that depth frame, and you can use that timestamp to query the corresponding pose (see the sketch after this list).
  • Sync color image and depth image: currently, you have to buffer either the depth point cloud or the color image at the application level, and, based on one of their timestamps, query the other's data in the buffer. There is a field named color_image in the TangoXYZij data structure, and the comment says it is reserved for future use, so a built-in sync feature might be coming in future releases.
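To make the first step concrete, here is a sketch using the C API (error handling trimmed; GetPoseForDepthFrame is just an illustrative name):

    #include <tango_client_api.h>

    // Query the pose at the depth frame's own timestamp instead of taking
    // whatever pose happened to arrive most recently.
    bool GetPoseForDepthFrame(const TangoXYZij* xyz_ij, TangoPoseData* pose) {
      TangoCoordinateFramePair frames;
      frames.base = TANGO_COORDINATE_FRAME_START_OF_SERVICE;
      frames.target = TANGO_COORDINATE_FRAME_DEVICE;
      return TangoService_getPoseAtTime(xyz_ij->timestamp, frames, pose) ==
             TANGO_SUCCESS;
    }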