
I am trying to use the Project Tango C API, but the application crashes with no error if the point cloud contains more than ~6.5k points (found after some testing) with the following code:

    int width = mImageSource->getDepthImageSize().x;
    int height = mImageSource->getDepthImageSize().y;
    double fx = mImageSource->calib.intrinsics_d.projectionParamsSimple.fx;
    double fy = mImageSource->calib.intrinsics_d.projectionParamsSimple.fy;
    double cx = mImageSource->calib.intrinsics_d.projectionParamsSimple.px;
    double cy = mImageSource->calib.intrinsics_d.projectionParamsSimple.py;

    memset(inputRawDepthImage->GetData(MEMORYDEVICE_CPU), -1, sizeof(short)*width*height);
    for (int i = 0; i < XYZ_ij->xyz_count; i++) {
        float X = XYZ_ij->xyz[i*3][0];
        float Y = XYZ_ij->xyz[i*3][1];
        float Z = XYZ_ij->xyz[i*3][2];
        if (Z < EPSILON || (X < EPSILON && -X < EPSILON) || (Y < EPSILON && -Y < EPSILON) || X != X || Y != Y || Z != Z)
            continue;
        int x_2d = (int)(fx*X/Z+cx);
        int y_2d = (int)(fy*Y/Z+cy);
        if (x_2d >=0 && x_2d < width && y_2d >= 0 && y_2d < height && (x_2d != 0 || x_2d != 0)) {
            inputRawDepthImage->GetData(MEMORYDEVICE_CPU)[x_2d + y_2d*width] = (short) (Z*1000);
        } else {
            continue;
        }
    }

However, if I use for (int i = 0; i < XYZ_ij->xyz_count && i < 6500; i++) everything works fine. I am just wondering if there is an upper bound on accessing the point cloud with the C API, or whether I did something wrong.

(width is 320, height is 180, and the other intrinsics are loaded from the Tango API.)

In addition, Google mentions using a nearest-neighbor filter to get a dense depth map at the bottom of this page. Is there an interface in the Tango API for this, or could anyone suggest an open-source implementation?

I am also wondering if there is any way to "pull" the color image (1280x720) in onXYZijAvailable, because I need a dense, synchronized colored point cloud. Do I need to apply an extrinsic matrix to align both coordinate frames, or do I only need to subsample the color image (assuming their coordinate systems are the same)?

Thank you for any advice!


1 Answer


In your code that looks up the depth sample coordinates...

    for (int i = 0; i < XYZ_ij->xyz_count; i++) {
        float X = XYZ_ij->xyz[i*3][0];
        float Y = XYZ_ij->xyz[i*3][1];
        float Z = XYZ_ij->xyz[i*3][2];

...you should be using an index of i, not i*3. It is a 2D array so you don't have to manage the stride for the higher dimension yourself.
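A minimal corrected version of that loop might look like this (same projection logic as in your question, just with the index fixed; EPSILON, the intrinsics, and inputRawDepthImage are taken from your code):

    for (int i = 0; i < XYZ_ij->xyz_count; i++) {
        // xyz holds xyz_count entries of float[3], so index the first dimension with i
        float X = XYZ_ij->xyz[i][0];
        float Y = XYZ_ij->xyz[i][1];
        float Z = XYZ_ij->xyz[i][2];
        // skip samples that are at the origin, behind the camera, or NaN
        if (Z < EPSILON || (X < EPSILON && -X < EPSILON) || (Y < EPSILON && -Y < EPSILON) || X != X || Y != Y || Z != Z)
            continue;
        // project the 3D point into the depth image with the pinhole intrinsics
        int x_2d = (int)(fx*X/Z + cx);
        int y_2d = (int)(fy*Y/Z + cy);
        if (x_2d >= 0 && x_2d < width && y_2d >= 0 && y_2d < height)
            inputRawDepthImage->GetData(MEMORYDEVICE_CPU)[x_2d + y_2d*width] = (short)(Z*1000);
    }

With i*3 the read runs three times past the end of the xyz array, which is why the crash only shows up once the cloud gets large enough.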

The SDK does not provide a call to fill in locations with no depth samples, probably because there are many approaches with different tradeoffs. The Wikipedia page on nearest neighbor search is a reasonable place to start. There is an interface to FLANN in OpenCV.
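As a rough sketch of that idea (this is not part of the Tango API; it assumes OpenCV 3.x with the flann module and a CV_16U depth image where 0 marks a missing sample, and it simply copies each empty pixel from its nearest valid pixel):

    #include <opencv2/core.hpp>
    #include <opencv2/flann.hpp>
    #include <vector>

    // Fill empty (zero) pixels of a 16-bit depth map from their nearest valid pixel.
    void fillDepthNearestNeighbor(cv::Mat& depth /* CV_16U */) {
        std::vector<cv::Point2f> validPts;
        std::vector<unsigned short> validVals;
        for (int y = 0; y < depth.rows; ++y)
            for (int x = 0; x < depth.cols; ++x)
                if (depth.at<unsigned short>(y, x) > 0) {
                    validPts.push_back(cv::Point2f((float)x, (float)y));
                    validVals.push_back(depth.at<unsigned short>(y, x));
                }
        if (validPts.empty())
            return;

        // Build a KD-tree over the coordinates of the valid pixels.
        cv::Mat features = cv::Mat(validPts).reshape(1).clone();   // Nx2, CV_32F
        cv::flann::Index index(features, cv::flann::KDTreeIndexParams(4));

        cv::Mat query(1, 2, CV_32F), indices, dists;
        for (int y = 0; y < depth.rows; ++y)
            for (int x = 0; x < depth.cols; ++x)
                if (depth.at<unsigned short>(y, x) == 0) {
                    query.at<float>(0, 0) = (float)x;
                    query.at<float>(0, 1) = (float)y;
                    index.knnSearch(query, indices, dists, 1, cv::flann::SearchParams(32));
                    depth.at<unsigned short>(y, x) = validVals[indices.at<int>(0, 0)];
                }
    }

At 320x180 a per-pixel lookup like this is cheap enough; for something faster or smoother you could use a distance transform or interpolate between neighbors instead of copying the single nearest value.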

The SDK will only deliver the latest color image to you. If you want a prior image (e.g. one with a timestamp close to your depth samples) you will have to manage that yourself. Because you can never get a color image at exactly the same timestamp as your depth samples (the same camera is used in different modes for both), you theoretically should apply the extrinsic pose to align them. In practice, if the motion is small over the half frame time or less between the two timestamps, I think most people are going to ignore it.
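The bookkeeping for that can be as simple as a small buffer of recent color frames keyed by timestamp; here is a minimal sketch (ColorFrame and the buffer size are placeholders, not Tango API types, and the image bytes should be copied inside the color callback rather than holding a pointer to the callback's buffer):

    #include <cmath>
    #include <cstdint>
    #include <deque>
    #include <mutex>
    #include <vector>

    // Placeholder container for a buffered color frame; not a Tango API type.
    struct ColorFrame {
        double timestamp;
        std::vector<uint8_t> pixels;   // copied out of the camera callback
    };

    static std::deque<ColorFrame> gColorFrames;   // newest frame at the back
    static std::mutex gColorMutex;
    static const size_t kMaxBufferedFrames = 5;

    // Call from the color camera callback with a copy of the image data.
    void bufferColorFrame(double timestamp, const uint8_t* data, size_t size) {
        std::lock_guard<std::mutex> lock(gColorMutex);
        gColorFrames.push_back({timestamp, std::vector<uint8_t>(data, data + size)});
        if (gColorFrames.size() > kMaxBufferedFrames)
            gColorFrames.pop_front();
    }

    // Call from onXYZijAvailable with the depth cloud's timestamp;
    // returns the buffered color frame whose timestamp is closest to it.
    bool closestColorFrame(double depthTimestamp, ColorFrame* out) {
        std::lock_guard<std::mutex> lock(gColorMutex);
        if (gColorFrames.empty())
            return false;
        const ColorFrame* best = &gColorFrames.front();
        for (const ColorFrame& f : gColorFrames)
            if (std::fabs(f.timestamp - depthTimestamp) < std::fabs(best->timestamp - depthTimestamp))
                best = &f;
        *out = *best;   // caller then projects the point cloud into this image
        return true;
    }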