The goal is to generate a disparity map for a pair of calibrated stereo images.
A 3D model is projected onto a pair of calibrated stereo images (left/right) using the OpenCV function cv::projectPoints(), which returns the 2D image coordinates as cv::Point2f with subpixel accuracy.
Since some 3D points project onto the same pixel region, I keep only the point with the smallest depth (Z), because farther points are occluded by closer ones.
This gives me two index images, one for the left image and one for the right. Each pixel of an index image refers to that point's position in the 3D model (stored in a std::vector<cv::Point3f>) or in 2D (std::vector<cv::Point2f>).
The following snippet briefly illustrates the procedure:
std::vector<cv::Point3f> model_3D;
std::vector<cv::Point2f> projectedPointsL, projectedPointsR;
cv::projectPoints(model_3D, rvec, tvec, P1.colRange(0,3), cv::noArray(), projectedPointsL);
cv::projectPoints(model_3D, rvec, tvec, P2.colRange(0,3), cv::noArray(), projectedPointsR);
// Each pixel in indexImage is an index pointing to a position in the vector of projected points
cv::Mat indexImageL, indexImageR;
// This function filters the projected points and returns the index image
filterProjectedPoints(projectedPointsL, model_3D, indexImageL);
filterProjectedPoints(projectedPointsR, model_3D, indexImageR);
To generate the disparity map, I can either:
1. For each pixel in the disparity map, find the corresponding pixel positions in the left/right index images and subtract them. This gives integer disparity (no subpixel accuracy);
2. For each pixel in the disparity map, look up its 2D (floating-point) positions among both the left and right projected points and take the difference along the x axis as the disparity. This gives subpixel-accuracy disparity.
The first way is straightforward but introduces error by discarding the subpixel positions of the projected points. However, the second way also introduces error, because a pair of projected points (from the same 3D point) may land at different locations within their respective pixels. For example, a point projects to (115.289, 80.393) in the left image and to (145.686, 79.883) in the right image. Its position in the disparity map would be (115, 80), and the disparity would be 145.686 - 115.289 = 30.397. As you can see, the two projections may not be exactly row-aligned (same y coordinate).
My questions are: 1. Are both ways correct (apart from the errors they introduce)? 2. If the second way is correct, is its error negligible when computing subpixel-accuracy disparity?
Also, feel free to tell me how you would compute a subpixel disparity map in this scenario.