0 votes

I have to measure the Z-distances between corresponding points of two clouds. I intend to iterate through one cloud and calculate the distance between the Z coordinates, using the same X and Y in the other cloud. Unfortunately this doesn't work, as there is never a point at exactly those X-Y coordinates in the second cloud. My current workaround is to search for the closest point in the second cloud to the X-Y of the first cloud. It works, but it is very slow.
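For illustration, the workaround is roughly the following per-point lookup (a naive linear scan over the second cloud; the function name and the squared X-Y distance criterion are just assumptions for this sketch, not my exact code):

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <limits>

// Hypothetical sketch: for a query X-Y from the first cloud, find the point of
// the second cloud whose X-Y position is closest and return its Z value.
float zOfClosestXY( const pcl::PointCloud<pcl::PointXYZ> & cloud2,
                    float x, float y )
{
    float bestDist2 = std::numeric_limits<float>::max();
    float bestZ = 0.0f;
    for( size_t i = 0; i < cloud2.points.size(); ++i )
    {
        const float dx = cloud2.points[i].x - x;
        const float dy = cloud2.points[i].y - y;
        const float d2 = dx * dx + dy * dy;   // squared X-Y distance only
        if( d2 < bestDist2 )
        {
            bestDist2 = d2;
            bestZ = cloud2.points[i].z;
        }
    }
    return bestZ;   // Z-distance is then queryZ - bestZ
}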

Is there a way to align the X and Y coordinates of the points onto a defined grid using PCL? This way I hope the X-Y coordinates will match better.

EDIT: OK, here are some images and more explanation.

[Image: Top view]

[Image: Side view]

There is a scan of a saddle and one of a horse's back. Both were made independently but are aligned along the Z-axis; the Z-axes of both are parallel. I want to create a model of a layer which fits exactly under the saddle (not just a rectangular pad).

So, given a thickness for the layer, I want to iterate through the saddle points and find the Z-distance to the corresponding point on the horse's back. As the X and Y coordinates are floats, there is almost never a point on the horse with the same X-Y as on the saddle.

I think if I could align all points to a grid with a given density, there would be a corresponding X-Y point on the horse for each X-Y saddle point above it.

What kind of data do you have? What are corresponding points: points at the same pixel position in the depth maps? Or nearest neighbours (non-unique)? Or point pairs minimizing the squared sum of distances? – Simson
@Simson: I've added some info and images. I hope it's clearer now. Thank you for asking! – Valentin Heinitz

2 Answers

1 vote

I am not really sure if that is what you mean, but maybe the "grid" you are talking about could just be the image plane? So instead of using the 3D point cloud you could take the depth maps/depth images and just compare the values of two depth maps at the same image coordinates. This would assume that the recordings are already aligned.

If you only have the point cloud data, you'd have to project it onto the image plane first (for this you'd have to know the intrinsics of the camera).
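A minimal sketch of such a projection, assuming a simple pinhole model; the struct and function names and the intrinsics fx, fy, cx, cy are placeholders:

#include <cmath>

// Hypothetical pinhole projection: maps a 3D point (X, Y, Z) in the camera
// frame onto integer pixel coordinates (u, v), given the camera intrinsics.
struct Intrinsics { float fx, fy, cx, cy; };

bool projectToPixel( const Intrinsics & K, float X, float Y, float Z,
                     int & u, int & v )
{
    if( Z <= 0.0f )          // point behind the camera, cannot be projected
        return false;
    u = static_cast<int>( std::lround( K.fx * X / Z + K.cx ) );
    v = static_cast<int>( std::lround( K.fy * Y / Z + K.cy ) );
    return true;
}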

Another option might be aligning the clouds using a registration method (e.g. ICP). Then you could also get the (sum of) distance(s) for corresponding points of the clouds.
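A rough sketch of what that could look like with PCL's IterativeClosestPoint (the iteration count and correspondence distance are placeholder values, not recommendations):

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/registration/icp.h>

// Sketch: align 'source' onto 'target' with ICP; the fitness score then gives
// a mean squared distance between corresponding points of the two clouds.
void registerClouds( pcl::PointCloud<pcl::PointXYZ>::Ptr source,
                     pcl::PointCloud<pcl::PointXYZ>::Ptr target )
{
    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource( source );
    icp.setInputTarget( target );
    icp.setMaximumIterations( 50 );            // placeholder value
    icp.setMaxCorrespondenceDistance( 0.05 );  // placeholder value

    pcl::PointCloud<pcl::PointXYZ> aligned;
    icp.align( aligned );

    if( icp.hasConverged() )
    {
        Eigen::Matrix4f transform = icp.getFinalTransformation();
        double fitness = icp.getFitnessScore();   // mean squared distance
        // ... use 'transform' and 'fitness' ...
    }
}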

0 votes

I've implemented a proof of concept and want to share it. However, I'd still appreciate a "proper" solution, probably an existing PCL API function.

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <QMap>
#include <QList>
#include <QString>

// Snaps the X-Y coordinates of the cloud onto a grid with the given density
// (grid cells of size 1/density) and stores the mean Z value per cell.
// The cloud itself is left unmodified; the result is written into 'grid',
// keyed by the cell position encoded as a "<x>x<y>" string.
bool alignToGrid( pcl::PointCloud<pcl::PointXYZRGBNormal>::Ptr cloud,
                  QMap<QString, float> & grid, int density )
{
    // Collect all Z values that fall into each grid cell.
    QMap<QString, QList<float> > tmpGridMap;
    for( std::vector<pcl::PointXYZRGBNormal, Eigen::aligned_allocator<pcl::PointXYZRGBNormal> >::const_iterator it1 = cloud->points.begin();
         it1 != cloud->points.end(); ++it1 )
    {
        // Snap the point onto the grid by scaling and truncating to int.
        int gridx = it1->x * density;
        int gridy = it1->y * density;
        QString pos = QString("%1x%2").arg(gridx).arg(gridy);
        tmpGridMap[pos].append(it1->z);
    }

    // Average the Z values collected for every cell.
    for( QMap<QString, QList<float> >::const_iterator it = tmpGridMap.begin(); it != tmpGridMap.end(); ++it )
    {
        float meanZ = 0;
        foreach( float f, it.value() )
        {
            meanZ += f;
        }
        meanZ /= it.value().size();

        grid[it.key()] = meanZ;
    }
    return true;
}

The idea is to iterate through a cloud and keep/create only points whose X-Y coordinates lie on the defined grid. A density of 1000 for Kinect clouds results in roughly a 1 mm grid. All points around a grid point are used for building the Z-average. The cloud itself remains unmodified. The output is a map from X-Y position to Z. The X-Y position is stored as a string (weird, I know) in the form "<x>x<y>". Using this map it is easy to find corresponding X-Y points in other grid-aligned clouds, as shown in the sketch below.
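A usage sketch with a hypothetical helper; it simply assumes both clouds were passed through alignToGrid with the same density:

#include <QMap>
#include <QString>

// Hypothetical usage: compute the Z-distance between corresponding grid cells
// of two clouds that were both aligned with alignToGrid() at the same density.
QMap<QString, float> zDistances( const QMap<QString, float> & saddleGrid,
                                 const QMap<QString, float> & horseGrid )
{
    QMap<QString, float> distances;
    for( QMap<QString, float>::const_iterator it = saddleGrid.begin();
         it != saddleGrid.end(); ++it )
    {
        if( horseGrid.contains( it.key() ) )
            distances[it.key()] = it.value() - horseGrid.value( it.key() );
    }
    return distances;
}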

Now I am able to map my clouds using any density; the images below show e.g. 1 mm and 1 cm.

[Images: grid-aligned clouds at 1 mm and 1 cm density]