
I have recently started working with OpenCV 3.0 and my goal is to capture a pair of stereo images from a set of stereo cameras, create a proper disparity map, convert the disparity map to a 3D point cloud and finally show the resulting point cloud in a point-cloud viewer using PCL.

I have already performed the camera calibration, and the resulting RMS error is 0.4.

You can find my image pair in the links: (Left Image) and (Right Image). I am using StereoSGBM to create the disparity image, and I use track-bars to adjust the StereoSGBM parameters to obtain a better disparity image. Unfortunately I can't post my disparity image, since I am new to Stack Overflow and don't have enough reputation to post more than two image links!

After getting the disparity image ("disp" in the code below), I use the reprojectImageTo3D() function to convert the disparity information to XYZ 3D coordinates, and then I convert the results into an array of "pcl::PointXYZRGB" points so they can be shown in a PCL point-cloud viewer. After performing the required conversion, what I get as a point cloud is a silly pyramid-shaped point cloud that does not make any sense. I have already read and tried all of the methods suggested in the following links:

1- http://blog.martinperis.com/2012/01/3d-reconstruction-with-opencv-and-point.html

2- http://stackoverflow.com/questions/13463476/opencv-stereorectifyuncalibrated-to-3d-point-cloud

3- http://stackoverflow.com/questions/22418846/reprojectimageto3d-in-opencv

and none of them worked!

Below is the conversion portion of my code; it would be greatly appreciated if you could tell me what I am missing:

pcl::PointCloud<pcl::PointXYZRGB>::Ptr pointcloud(new pcl::PointCloud<pcl::PointXYZRGB>());
    Mat xyz;
    reprojectImageTo3D(disp, xyz, Q, false, CV_32F);
    pointcloud->is_dense = false;
    pcl::PointXYZRGB point;
    for (int i = 0; i < disp.rows; ++i)
    {
        // Frame_RGBRight is the BGR frame the disparity is registered to
        uchar* rgb_ptr = Frame_RGBRight.ptr<uchar>(i);
        uchar* disp_ptr = disp.ptr<uchar>(i);   // assumes disp is CV_8U

        for (int j = 0; j < disp.cols; ++j)
        {
            uchar d = disp_ptr[j];
            if (d == 0) continue;               // skip pixels with no disparity
            Point3f p = xyz.at<Point3f>(i, j);  // xyz is CV_32FC3

            point.z = p.z;   // I have also tried p.z/16
            point.x = p.x;
            point.y = p.y;

            point.b = rgb_ptr[3 * j];           // OpenCV stores pixels as BGR
            point.g = rgb_ptr[3 * j + 1];
            point.r = rgb_ptr[3 * j + 2];
            pointcloud->points.push_back(point);
        }
    }
    // Skipped pixels make the cloud unorganized, so set the size after the loop
    pointcloud->width = static_cast<uint32_t>(pointcloud->points.size());
    pointcloud->height = 1;
    viewer.showCloud(pointcloud);
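
For what it's worth, the usual way to handle StereoSGBM's fixed-point output (the reason I experimented with p.z/16 above) is to convert the raw CV_16S disparity, which is scaled by 16, to float before reprojecting. A minimal sketch, assuming "disp" holds the raw matcher output:

    // Convert the fixed-point CV_16S disparity (scaled by 16) to true
    // float disparities before reprojecting, instead of dividing p.z later
    Mat disp32f;
    disp.convertTo(disp32f, CV_32F, 1.0 / 16.0);
    reprojectImageTo3D(disp32f, xyz, Q, false, CV_32F);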
please check the images you have provided, they look the same – alexisrozhkov

Sorry, my bad. I have uploaded the right frames! – emasnavi

1 Answer


After doing some work and some research I found my answer, and I am sharing it here so other readers can use it.

Nothing was wrong with the conversion from the disparity image to 3D XYZ (and eventually to a point cloud). The problem was the distance of the objects (that I was taking pictures of) from the cameras, and the amount of information available for the StereoBM or StereoSGBM algorithms to detect similarities between the two images (the image pair). To get a proper 3D point cloud you need a good disparity image, and to get a good disparity image (assuming you have performed a good calibration), make sure of the following:

1- There should be enough detectable and distinguishable common features between the two frames (left and right). The reason is that StereoBM and StereoSGBM look for common features between the two frames, and they can easily be fooled by similar-looking regions that do not actually belong to the same object. I personally think these two matching algorithms have a lot of room for improvement. So be aware of what your cameras are looking at.

2- Objects of interest (the ones you want a 3D point-cloud model of) should be within a certain distance of your cameras. The larger the baseline (the distance between the two cameras), the farther away your objects of interest (targets) can be; see the sketch right after this list.
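
To make this distance/baseline trade-off concrete, here is a minimal sketch of the standard stereo relation Z = f * B / d (the focal-length and baseline numbers are hypothetical placeholders, not values from my rig):

    #include <iostream>

    int main()
    {
        // Pinhole stereo geometry: depth Z = f * B / d, where
        // f = focal length [px], B = baseline [m], d = disparity [px]
        const double f = 700.0;           // hypothetical focal length
        const double B = 0.06;            // hypothetical 6 cm baseline
        const double minReliableDisp = 1.0;

        // Farthest depth the matcher can still resolve at ~1 px disparity
        std::cout << "Max usable depth: " << f * B / minReliableDisp << " m\n";

        // Doubling the baseline doubles the disparity at any given depth,
        // so a larger baseline lets targets sit farther from the cameras
        const double Z = 0.5;             // object half a meter away
        std::cout << "Disparity at 0.5 m: " << f * B / Z << " px\n";
        return 0;
    }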

A noisy and distorted disparity image never generates a good 3D point cloud. One thing you can do to improve your disparity image is to use track-bars in your application, so you can adjust the StereoBM or StereoSGBM parameters until you see good results (a clear and smooth disparity image). The code below is a small, simple example of how to create the track-bars (I wrote it as simply as possible). Use it as required:

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    int PreFilterType = 0, PreFilterCap = 0, MinDisparity = 0, UniquenessRatio = 0, TextureThreshold = 0,
        SpeckleRange = 0, SADWindowSize = 5, SpeckleWindowSize = 0, numDisparities = 0, PreFilterSize = 5;

    Ptr<StereoBM> sbm = StereoBM::create(numDisparities, SADWindowSize);

    // Create the window and track-bars once, before the loop
    namedWindow("Track Bar Window", WINDOW_NORMAL);
    createTrackbar("Pre Filter Type", "Track Bar Window", &PreFilterType, 1);
    createTrackbar("Pre Filter Size", "Track Bar Window", &PreFilterSize, 100);
    createTrackbar("Pre Filter Cap", "Track Bar Window", &PreFilterCap, 61);
    createTrackbar("Minimum Disparity", "Track Bar Window", &MinDisparity, 200);
    createTrackbar("Uniqueness Ratio", "Track Bar Window", &UniquenessRatio, 2500);
    createTrackbar("Texture Threshold", "Track Bar Window", &TextureThreshold, 10000);
    createTrackbar("Speckle Range", "Track Bar Window", &SpeckleRange, 500);
    createTrackbar("Block Size", "Track Bar Window", &SADWindowSize, 100);
    createTrackbar("Speckle Window Size", "Track Bar Window", &SpeckleWindowSize, 200);
    createTrackbar("Number of Disparities", "Track Bar Window", &numDisparities, 500);

    while (true)
    {
        // Enforce the constraints StereoBM expects
        if (PreFilterSize % 2 == 0) PreFilterSize = PreFilterSize + 1;    // must be odd
        if (PreFilterSize < 5) PreFilterSize = 5;                         // and at least 5
        if (SADWindowSize % 2 == 0) SADWindowSize = SADWindowSize + 1;    // must be odd
        if (SADWindowSize < 5) SADWindowSize = 5;                         // and at least 5
        if (numDisparities % 16 != 0)                                     // must be divisible by 16
            numDisparities = numDisparities + (16 - numDisparities % 16);
        if (numDisparities < 16) numDisparities = 16;                     // and positive

        sbm->setPreFilterType(PreFilterType);             // 0 or 1
        sbm->setPreFilterSize(PreFilterSize);
        sbm->setPreFilterCap(PreFilterCap + 1);           // must be at least 1
        sbm->setMinDisparity(MinDisparity - 100);         // map track-bar [0,200] to [-100,100]
        sbm->setTextureThreshold(TextureThreshold * 0.0001);
        sbm->setSpeckleRange(SpeckleRange);
        sbm->setSpeckleWindowSize(SpeckleWindowSize);
        sbm->setUniquenessRatio(0.01 * UniquenessRatio);  // map track-bar [0,2500] to [0,25]
        sbm->setNumDisparities(numDisparities);
        sbm->setBlockSize(SADWindowSize);
        sbm->setSmallerBlockSize(15);
        sbm->setDisp12MaxDiff(32);

        // ... compute and show the disparity image here (see below) ...

        if (waitKey(30) == 27) break;   // press ESC to exit
    }
    return 0;
}
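
For completeness, here is a minimal sketch of what the placeholder step inside the loop could look like, assuming imgL and imgR are your rectified grayscale frames (those two names are placeholders, not variables from the code above):

    Mat disp, disp8;
    sbm->compute(imgL, imgR, disp);                       // raw CV_16S disparity, scaled by 16
    normalize(disp, disp8, 0, 255, NORM_MINMAX, CV_8U);   // rescale to 8-bit for display only
    imshow("Disparity", disp8);

Watching this window while dragging the track-bars makes it easy to see when the parameters produce a clear, smooth disparity image.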

If you are not getting proper results and a smooth disparity image, don't get discouraged. Try running your algorithm on the OpenCV sample images (the pair with the orange desk lamp) to make sure your pipeline is correct, and then try taking pictures from different distances and playing with the StereoBM/StereoSGBM parameters until you get something useful. I used my own face for this purpose, and since I had a very small baseline, I came very close to my cameras (here is a link to my 3D face point-cloud picture, and hey, don't you dare laugh!). I was very happy to see myself in 3D point-cloud form after a week of struggling. I have never been this happy to see myself before! ;)