I'm working on a stitching tool using OpenCV and CUDA, currently just attempting to stitch two overlapping images.
The images are:
My process is as follows:
- Feature detection on both images (ORB)
- Brute-force descriptor matching (Hamming distance)
- RANSAC estimation to produce a homography matrix
- Warping the second image by the homography matrix
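In outline, those steps look roughly like this (simplified from my actual code; the parameter values and canvas size are illustrative, not what I actually use):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Simplified outline of the detect/match/warp steps.
cv::Mat warpSecondImage(const cv::Mat& img1, const cv::Mat& img2)
{
    // 1. ORB feature detection on both images
    auto orb = cv::ORB::create(2000);
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    orb->detectAndCompute(img1, cv::noArray(), kp1, desc1);
    orb->detectAndCompute(img2, cv::noArray(), kp2, desc2);

    // 2. Brute-force matching with Hamming distance (cross-check enabled)
    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(desc2, desc1, matches);

    // 3. RANSAC homography mapping image 2 into image 1's coordinates
    std::vector<cv::Point2f> pts1, pts2;
    for (const auto& m : matches) {
        pts2.push_back(kp2[m.queryIdx].pt);
        pts1.push_back(kp1[m.trainIdx].pt);
    }
    cv::Mat H = cv::findHomography(pts2, pts1, cv::RANSAC, 3.0);

    // 4. Warp image 2 into image 1's frame (canvas size is illustrative)
    cv::Mat img2Warped;
    cv::warpPerspective(img2, img2Warped, H,
                        cv::Size(img1.cols + img2.cols, img1.rows));
    return img2Warped;
}
```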
I'm fairly confident these steps are correct: using OpenCV to draw the frames of the two images onto a black canvas after this process yields the following result:
This looks about correct. What I'm having trouble with now is the blending. I want to use feather blending, but the results I'm getting are entirely wrong. I've tried the following code, where img1 is the cv::Mat of the first image, img2Warped is the cv::Mat of the second image after warping by the H matrix, and finalImg is an empty cv::Mat:
```cpp
// Feather blender (second argument enables GPU where available)
auto blender = cv::detail::Blender::createDefault(cv::detail::Blender::FEATHER, true);

// Top-left corners and sizes of both images in the panorama frame
auto combinedCorners = std::vector<cv::Point>{ image1Corners[0], image2Corners[0] };
auto combinedSizes = std::vector<cv::Size>{ image1Size, image2Size };
blender->prepare(combinedCorners, combinedSizes);

// The blender expects CV_16SC3 input
img1.convertTo(img1, CV_16SC3);
img2Warped.convertTo(img2Warped, CV_16SC3);

// Feed both images with full masks
blender->feed(img1, cv::Mat::ones(img1.size(), CV_8U), image1Corners[0]);
blender->feed(img2Warped, cv::Mat::ones(img2Warped.size(), CV_8U), image2Corners[0]);
blender->blend(finalImg, cv::Mat());
```
This code produces the following output:
Can anyone advise on where I'm going wrong with the blending? I can't find any examples or documentation online for the feather blending approach I'm trying to use here.