
Currently I am working with IP cameras from Titathink (TT522PW), which deliver a 1280 × 720 stream at 30 FPS with normal sensors (not the high-sensitivity low-light model).

When capturing the video stream, we see a fisheye-type distortion on the frames.

Non-rectified images

  1. I first calibrated each camera individually to remove the distortion (in the individual calibrations I get an RMS error of rms_left = 0.166 for the left camera and rms_right = 0.162 for the right camera). Then, using the XML files produced by the individual calibrations, I calibrated the stereo pair; for the stereo calibration I get an RMS error of 0.207 (a sketch of this flow is shown after the rectified images below).

  2. By displaying the rectified images, we can see that the stereo calibration went well.

The calibrated images

Rectified image with horizontal lines
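
For reference, here is a minimal sketch of that two-stage calibration flow, assuming the chessboard corner lists object_points / image_points_left / image_points_right were already collected (e.g. with cv::findChessboardCorners); the names are illustrative, not from my actual code:

#include <opencv2/calib3d.hpp>
#include <vector>

std::vector<std::vector<cv::Point3f>> object_points;   // chessboard corners in 3D
std::vector<std::vector<cv::Point2f>> image_points_left, image_points_right;

cv::Mat K_left, D_left, K_right, D_right;
std::vector<cv::Mat> rvecs, tvecs;
cv::Size image_size(1280, 720);

// Stage 1: individual calibration, one camera at a time (returns RMS error)
double rms_left  = cv::calibrateCamera(object_points, image_points_left,
                                       image_size, K_left, D_left, rvecs, tvecs);
double rms_right = cv::calibrateCamera(object_points, image_points_right,
                                       image_size, K_right, D_right, rvecs, tvecs);

// Stage 2: stereo calibration, keeping the individual intrinsics fixed
cv::Mat R, T, E, F;
double rms_stereo = cv::stereoCalibrate(
    object_points, image_points_left, image_points_right,
    K_left, D_left, K_right, D_right, image_size, R, T, E, F,
    cv::CALIB_FIX_INTRINSIC);

// Rectification: produces P1 and P2, which initPointCloud() reads back later
cv::Mat R1, R2, P1, P2, Q;
cv::stereoRectify(K_left, D_left, K_right, D_right, image_size,
                  R, T, R1, R2, P1, P2, Q);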

  3. I reused the DJI functions for computing the disparity map as well as the one for computing the point cloud.

The code for computing and filtering the disparity map:

bool Disparity_filter::initDispParam(){
#ifdef USE_CUDA
  block_matcher_ = cv::cuda::createStereoBM(num_disp_, block_size_);
#else
  block_matcher_ = cv::StereoBM::create(num_disp_, block_size_);
#endif

#ifdef USE_OPEN_CV_CONTRIB
  wls_filter_ = cv::ximgproc::createDisparityWLSFilter(block_matcher_); // left_matcher
  wls_filter_->setLambda(8000.0);
  wls_filter_->setSigmaColor(1.5);

  right_matcher_ = cv::ximgproc::createRightMatcher(block_matcher_);
#endif
  return true;
}
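
As an aside, cv::StereoBM exposes several tuning knobs that strongly affect how stable the raw map is. A hedged sketch of typical post-construction tuning (the values are common starting points, not the DJI defaults; it assumes block_matcher_ is held as a cv::Ptr<cv::StereoBM>, since these setters live on StereoBM rather than on the StereoMatcher base):

// Illustrative StereoBM tuning; exact values depend on your scene
block_matcher_->setPreFilterCap(31);        // clamp pre-filtered pixel values
block_matcher_->setTextureThreshold(10);    // reject low-texture regions
block_matcher_->setUniquenessRatio(15);     // reject ambiguous matches
block_matcher_->setSpeckleWindowSize(100);  // suppress small disparity blobs
block_matcher_->setSpeckleRange(32);        // max disparity variation within a blob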


void Disparity_filter::computeDisparityMap(std::shared_ptr<Frame> framel, std::shared_ptr<Frame> framer){
  framel->raw_disparity_map_ = cv::Mat(HEIGHT, WIDTH, CV_16SC1);
#ifdef USE_CUDA
  cv::cuda::GpuMat cuda_disp_left;
  framel->cuda_crop_left.upload(framel->cpu_crop_left);
  framer->cuda_crop_right.upload(framer->cpu_crop_right);

  // GPU implementation of stereoBM outputs uint8_t, i.e. CV_8U
  block_matcher_->compute(framel->cuda_crop_left.clone(),
                          framer->cuda_crop_right.clone(),
                          cuda_disp_left);
  cuda_disp_left.download(framel->raw_disparity_map_);

  framel->raw_disparity_map_.convertTo(framel->disparity_map_8u_, CV_8UC1, 1);

  // convert it from CV_8U to CV_16S for unified
  // calculation in filterDisparityMap() & unprojectPtCloud()
  framel->raw_disparity_map_.convertTo(framel->raw_disparity_map_, CV_16S, 16);
#else
  // CPU implementation of stereoBM outputs short int, i.e. CV_16S
  cv::Mat left_for_matcher, right_for_matcher;

  left_for_matcher  = framel->cpu_crop_left.clone();
  right_for_matcher = framer->cpu_crop_right.clone();

  // StereoBM expects single-channel input
  cv::cvtColor(left_for_matcher,  left_for_matcher,  cv::COLOR_BGR2GRAY);
  cv::cvtColor(right_for_matcher, right_for_matcher, cv::COLOR_BGR2GRAY);

  block_matcher_->compute(left_for_matcher, right_for_matcher, framel->raw_disparity_map_);
  framel->raw_disparity_map_.convertTo(framel->disparity_map_8u_, CV_8UC1, 0.0625);
#endif
}
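
For reference, the 0.0625 factor is 1/16: the CPU StereoBM returns CV_16S fixed-point disparities with 4 fractional bits, so dividing by 16 recovers pixel units. A small fragment illustrating how one value is read back:

// Fragment: the raw CV_16S disparity map stores (disparity * 16)
short raw = framel->raw_disparity_map_.at<short>(v, u);
float disparity_px = raw * 0.0625f;   // == raw / 16.0f, now in pixels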


void Disparity_filter::filterDisparityMap(std::shared_ptr<Frame> framel, std::shared_ptr<Frame> framer){
  // The right matcher is derived from StereoBM, so it also needs
  // single-channel input (the crops are still BGR at this point)
  cv::Mat left_gray, right_gray;
  cv::cvtColor(framel->cpu_crop_left,  left_gray,  cv::COLOR_BGR2GRAY);
  cv::cvtColor(framer->cpu_crop_right, right_gray, cv::COLOR_BGR2GRAY);

  right_matcher_->compute(right_gray, left_gray, raw_right_disparity_map_);

  // Only takes CV_16S type cv::Mat
  wls_filter_->filter(framel->raw_disparity_map_,
                      framel->cpu_crop_left,
                      filtered_disparity_map_,
                      raw_right_disparity_map_);

  filtered_disparity_map_.convertTo(framel->filtered_disparity_map_8u_, CV_8UC1, 0.0625);
}
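
If some regions still flicker from frame to frame, cv::ximgproc::DisparityWLSFilter also exposes a confidence map (valid after filter() has run) that can be used to mask out unreliable pixels; a minimal sketch, where the 128 threshold is an assumption to tune:

// Confidence map is CV_32F with values in [0, 255]; higher means more trusted
cv::Mat confidence = wls_filter_->getConfidenceMap();
cv::Mat mask = confidence > 128.0f;   // tunable threshold (assumption)
cv::Mat masked_disparity;
framel->filtered_disparity_map_8u_.copyTo(masked_disparity, mask);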

The code for computing the point cloud:


bool PointCloud::initPointCloud(){
  std::string stereo_c2 = "../calibration/sterolast.xml";
  cv::FileStorage ts(stereo_c2, cv::FileStorage::READ);
  if (!ts.isOpened()) {
    std::cerr << "Failed to open calibration parameter file." << std::endl;
    return false;
  }
  ts["P1"] >> param_proj_left_;
  ts["P2"] >> param_proj_right_;

  principal_x_ = param_proj_left_.at<double>(0, 2);
  principal_y_ = param_proj_left_.at<double>(1, 2);
  fx_ = param_proj_left_.at<double>(0, 0);
  fy_ = param_proj_left_.at<double>(1, 1);
  baseline_x_fx_ = -param_proj_right_.at<double>(0, 3);

  std::cout << "** principal_x= " << principal_x_
            << "  ** principal_y= " << principal_y_
            << "  ** fx= " << fx_ << "  ** fy= " << fy_
            << "  ** baseline_x_fx= " << baseline_x_fx_
            << std::endl << std::flush;
  return true;
}
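
For context, after cv::stereoRectify the projection matrices of a horizontal stereo pair have the following layout, which is why -P2(0,3) is used as baseline_x_fx_:

P1 = [ fx  0   cx  0 ]        P2 = [ fx  0   cx  -fx*B ]
     [ 0   fy  cy  0 ]             [ 0   fy  cy   0    ]
     [ 0   0   1   0 ]             [ 0   0   1    0    ]

So baseline_x_fx_ = fx * B (focal length in pixels times baseline in metres), and depth follows as Z = baseline_x_fx_ / disparity.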
void PointCloud::unprojectPtCloud(std::shared_ptr<Frame> framel)
{
  // due to rectification, the image borders are blank, so we cut them out
  int border_size = num_disp_;
  const int trunc_img_width_end  = WIDTH  - border_size;
  const int trunc_img_height_end = HEIGHT - border_size;

  mat_vec3_pt_ = cv::Mat_<cv::Vec3f>(HEIGHT, WIDTH, cv::Vec3f(0, 0, 0));

  for(int v = border_size; v < trunc_img_height_end; ++v)
  {
    for(int u = border_size; u < trunc_img_width_end; ++u)
    {
      cv::Vec3f &point = mat_vec3_pt_.at<cv::Vec3f>(v, u);

      // raw disparity is CV_16S fixed point; 0.0625 = 1/16 converts to pixels
      float disparity = (float)(framel->raw_disparity_map_.at<short int>(v, u) * 0.0625);

      // only unproject points whose disparity is above the cut-off
      // (small disparities are far away and unreliable)
      if(disparity >= 60)
      {
        point[2] = baseline_x_fx_ / disparity;
        point[0] = (u - principal_x_) * point[2] / fx_;
        point[1] = (v - principal_y_) * point[2] / fy_;
      }
      color_buffer_[v * WIDTH + u] = framel->cpu_crop_left.at<uint8_t>(v, u);
    }
  }

  cv::Mat color_mat_ = cv::Mat(HEIGHT, WIDTH, CV_8UC1, &color_buffer_[0]).clone();
  framel->mat_vec3 = mat_vec3_pt_;
  framel->color_m  = color_mat_;
  pt_cloud_ = cv::viz::WCloud(mat_vec3_pt_, color_mat_);
}
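
The loop above is the standard pinhole unprojection: Z = fx*B/d, X = (u - cx)*Z/fx, Y = (v - cy)*Z/fy. A quick numeric check of the disparity >= 60 cut-off, with made-up values for fx and the baseline:

// Hypothetical values, only to illustrate magnitudes (not my calibration)
float fx = 700.0f;          // focal length in pixels (assumption)
float baseline = 0.12f;     // baseline in metres (assumption)
float disparity = 60.0f;    // the cut-off used in unprojectPtCloud()
float z = fx * baseline / disparity;  // = 1.4 m: only nearer points survive

Note that the original DJI comment spoke of discarding points farther than 8.6 m at disparity < 6; with the disparity already scaled to pixel units, a threshold of 60 keeps only much closer points, so this value is worth double-checking.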

When I compute the disparity map and filter it, the map is not 100% clean (we see areas that change light intensity even though the cameras and the obstacles are fixed in the stream; not perfectly clear, but acceptable). You can see a short video in which I tested it with the calibration file with RMS = 0.2.

Test of stereo vision - disparity map

Point cloud result

THE QUESTIONS

  • Is the stereo calibration I performed, with an RMS error of 0.20, good enough to get a clear disparity map and a complete point cloud of the field of view of the two cameras?

  • How can I obtain a stable and clean disparity map and a clean depth map?

Thanks for any help :)


1 Answer


How can I obtain a stable and clean disparity map and a clean depth map?

To answer this question, I looked at the video you shared. The filtered disparity map looks good; the WLS filter you used produces a disparity map exactly like that, so there is nothing wrong with it. However, for point clouds it is usually not recommended to use the filtered disparity map as input, because the filter tends to fill in holes that were not actually matched by the algorithm; in other words, that filled-in data is unreliable. So try giving the unfiltered disparity map as input to the point cloud.
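
If you want a quick way to try that, you can feed the unfiltered CV_16S disparity into cv::reprojectImageTo3D together with the Q matrix from cv::stereoRectify; a minimal sketch, where raw_disparity_map stands for your framel->raw_disparity_map_ and it is assumed Q was also saved into your calibration XML (your current file may not contain it):

// Minimal sketch: unfiltered disparity -> 3D points via the Q matrix
cv::FileStorage fs("../calibration/sterolast.xml", cv::FileStorage::READ);
cv::Mat Q;
fs["Q"] >> Q;   // assumes Q was written out during stereoRectify

cv::Mat disparity_f32;
raw_disparity_map.convertTo(disparity_f32, CV_32F, 0.0625);  // back to pixel units

cv::Mat points_3d;   // CV_32FC3, one XYZ triplet per pixel
cv::reprojectImageTo3D(disparity_f32, points_3d, Q, /*handleMissingValues=*/true);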

Also, the viewer you are using to view the point cloud, i.e. MeshLab, often tends to drop some points, so you could try another viewer such as CloudCompare.

Is the stereo calibration I performed, with an RMS error of 0.20, good enough to get a clear disparity map and a complete point cloud of the field of view of the two cameras?

Yes, in most cases an RMS error of 0.20 is good enough. But again, the smaller the better.