2 votes

Say I have two web cameras, call them cam1 and cam2, and I want to calibrate them to get the transformation between them.

I used cv::stereoCalibrate() to calibrate.

After I got the transformation from cam1 to cam2 (call it R, T), I wanted to check the accuracy of the calibration result.

So I used cam1 and cam2 to take a picture of a chessboard (pic1 and pic2). I got cam1's extrinsic parameters with cv::solvePnP(), and drew cam1's world coordinate system in pic1 with cv::projectPoints().

Then I thought that cam2's rotation matrix = cam1's rotation matrix * R, and that cam2's translation vector = cam1's translation vector + T.

I calculated cam2's extrinsic parameters following that idea, and also drew cam2's world coordinate system in pic2 with cv::projectPoints().

But the origin drawn in pic2 was not in the right position.

Here is part of the code I used.

void check_res(const vector<string> &imgs_nm,const Mat &R,const Mat &T,const Mat &cam_c,const Mat &cam_h,const Mat &dist_c,const Mat &dist_h)
{
    int imgs_cnt=imgs_nm.size()/2;
    vector<Point3f> obj_pts;
    for(int i=0;i<boardDimensions.height;i++)
        for(int j=0;j<boardDimensions.width;j++)
            obj_pts.push_back(Point3f(i*CHESS_LEN,j*CHESS_LEN,0.f));
    for(int i=0;i<imgs_cnt;i++)
    {
        vector<Point2f> c_cners,h_cners;
        Mat imgc_gray,imgh_gray;
        Mat imgc=imread(imgs_nm[i*2],1);
        Mat imgc_rz=imgc.clone();

        bool c_found,h_found;
        c_found=HasChessBoard(imgc_rz,imgc_gray,c_cners);
        if(c_found)
            cv::cornerSubPix(imgc_gray, c_cners, cv::Size(11, 11), cv::Size(-1, -1),cv::TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
        Mat imgh=imread(imgs_nm[i*2+1],1);
        h_found=HasChessBoard(imgh,imgh_gray,h_cners);
        if(h_found)
            cv::cornerSubPix(imgh_gray, h_cners, cv::Size(11, 11), cv::Size(-1, -1),cv::TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
        Mat rvec_c,rvec_h,tvec_c,tvec_h;
        cv::solvePnP(obj_pts,c_cners,cam_c,dist_c,rvec_c,tvec_c);
        cv::solvePnP(obj_pts,h_cners,cam_h,dist_h,rvec_h,tvec_h);
        Mat rrvec_c,rrvec_h;
        cv::Rodrigues(rvec_c,rrvec_c);
        cv::Rodrigues(rvec_h,rrvec_h);
        // derive cam2's pose from cam1's pose and the stereo extrinsics (the step in question)
        Mat r1=rrvec_c*R;
        Mat t1=tvec_c+T;

        Mat img1=imgh.clone();
        draw_chess(imgh,rrvec_h,tvec_h,cam_h,dist_h);
        imshow("pic1",imgh);
        draw_chess(img1,r1,t1,cam_h,dist_h);
        imshow("pic2",img1);


        char resc=waitKey(0);
        if(resc=='q')
            exit(1);

    }
}

And below is the result I got when testing with the sample images from OpenCV.

[result image]

I don't think it is a matter of low calibration accuracy, because I used OpenCV's sample images and cv::stereoCalibrate() returned an RMS error of less than 1 pixel.

Any advice is appreciated. Thank you!


2 Answers

2 votes

The formulas are:

  • pose of camera 1 (as a homogeneous matrix):

    ${}^{c_1}\mathbf{T}_w = \begin{bmatrix} {}^{c_1}\mathbf{R}_w & {}^{c_1}\mathbf{t}_w \\ \mathbf{0}_{1\times3} & 1 \end{bmatrix}$

  • homogeneous transformation from camera 1 to camera 2:

    ${}^{c_2}\mathbf{T}_{c_1} = \begin{bmatrix} \mathbf{R} & \mathbf{T} \\ \mathbf{0}_{1\times3} & 1 \end{bmatrix}$

  • pose of camera 2:

    ${}^{c_2}\mathbf{T}_w = {}^{c_2}\mathbf{T}_{c_1} \, {}^{c_1}\mathbf{T}_w$, i.e. $\mathbf{R}_2 = \mathbf{R}\,\mathbf{R}_1$ and $\mathbf{t}_2 = \mathbf{R}\,\mathbf{t}_1 + \mathbf{T}$
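
In code, reusing the names from the question's check_res() (rvec_c/tvec_c from solvePnP on pic1, R/T from stereoCalibrate, cam_h/dist_h and obj_pts as above), a minimal sketch of that composition could look like this; note that the rotations are multiplied as R * R1 and that the translation is rotated before T is added:

// Sketch: compose cam2's extrinsics from cam1's pose and the stereo extrinsics.
// All Mats are assumed CV_64F, as returned by solvePnP()/stereoCalibrate().
Mat R1;
cv::Rodrigues(rvec_c, R1);          // 3x3 rotation of cam1 (world -> cam1)

Mat R2 = R * R1;                    // rotation of cam2 (note the order: R * R1)
Mat t2 = R * tvec_c + T;            // translation of cam2 (rotate tvec_c, then add T)

Mat rvec2;
cv::Rodrigues(R2, rvec2);           // back to a rotation vector for projectPoints()
vector<Point2f> img_pts;
cv::projectPoints(obj_pts, rvec2, t2, cam_h, dist_h, img_pts);

This is also why the question's r1 = rrvec_c * R and t1 = tvec_c + T put the origin in the wrong place: the rotations are composed in the wrong order, and tvec_c is never rotated into cam2's frame.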

0 votes

For checking the accuracy of your stereo calibration, I would consider a different approach:

  1. Use the function stereoRectify to get the rectification transforms for the two cameras, passing in the rotation and translation (R, T) you got from stereoCalibrate.
  2. Perform initUndistortRectifyMap once for each camera. Use the rectification rotation (R1, R2) and projection matrix (P1, P2) returned by stereoRectify.
  3. Use the results you got to remap the images from both cameras.

If your calibration went well, the output images should be rectified and undistorted.
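
A minimal sketch of that pipeline, reusing cam_c/dist_c, cam_h/dist_h, the R, T from stereoCalibrate, and the imgc/imgh image pair from the question, and assuming img_size (a name introduced here) is the image size used during calibration:

// Sketch: rectify both views with the stereoCalibrate() result and inspect the output.
Mat R1, R2, P1, P2, Q;
cv::stereoRectify(cam_c, dist_c, cam_h, dist_h, img_size, R, T, R1, R2, P1, P2, Q);

// Build the undistort + rectify maps once per camera.
Mat map1x, map1y, map2x, map2y;
cv::initUndistortRectifyMap(cam_c, dist_c, R1, P1, img_size, CV_32FC1, map1x, map1y);
cv::initUndistortRectifyMap(cam_h, dist_h, R2, P2, img_size, CV_32FC1, map2x, map2y);

// Remap a captured pair; with a good calibration, corresponding features
// (e.g. the chessboard corners) should land on the same row in both outputs.
Mat rect_c, rect_h;
cv::remap(imgc, rect_c, map1x, map1y, cv::INTER_LINEAR);
cv::remap(imgh, rect_h, map2x, map2y, cv::INTER_LINEAR);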