I am working on a project that estimates the location of a UAV (quadcopter) using optical flow. I currently have code that uses OpenCV's Farneback algorithm, and it works fine as long as the camera is always pointing at the ground.
Now I want to add support for the case where the camera is not pointing straight down, i.e. the quadcopter has some pitch / roll / yaw (Euler angles). The quadcopter's Euler angles are known, and I am looking for a way to compute and apply the transformation based on them, so that the resulting image looks as if it was taken from directly above (see image below).
I found methods that calculate the transformation given two sets (source and destination) of 4 corners, via the findHomography or getPerspectiveTransform functions from OpenCV. But I couldn't find any method that works when only the Euler angles are known (because I don't know the destination image corners).
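To illustrate what I mean, the 4-corner approach looks roughly like this (the destination corners here are made-up placeholder values, which is exactly the part I don't know how to compute from the Euler angles):

// Illustration only: getPerspectiveTransform needs BOTH corner sets.
std::vector<cv::Point2f> srcCorners = {
    {0, 0}, {640, 0}, {640, 480}, {0, 480}};     // source image corners
std::vector<cv::Point2f> dstCorners = {
    {50, 30}, {600, 10}, {620, 470}, {70, 450}}; // placeholders - unknown in my case
cv::Mat H = cv::getPerspectiveTransform(srcCorners, dstCorners);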
So my question is: which method can I use, and how, to transform a frame so that it looks as if it was taken from directly above, using only the Euler angles (and the camera's height above the ground, if necessary)?
In order to demonstrate what I need:
The relevant part of my current code is below:
for(;;)
{
    Mat m, disp, warp;
    vector<Point2f> corners;
    // grab the next frame - still distorted
    cap >> origFrame;
    // undistort the frame using the calibration parameters
    cv::undistort(origFrame, undistortFrame, cameraMatrix, distCoeffs, noArray());
    // lower the processing effort by converting the picture to grayscale
    cvtColor(undistortFrame, gray, COLOR_BGR2GRAY);
    if( !prevgray.empty() )
    {
        // calculate dense optical flow
        calcOpticalFlowFarneback(prevgray, gray, uflow, 0.5, 3 /* def 3 */, 10 /* def 15 */, 3, 3, 1.2 /* def 1.2 */, 0);
        uflow.copyTo(flow);
        // get the average flow (my own helper; see the sketch below)
        calcAvgOpticalFlow(flow, 16, corners);
        // calculate the range of view - 2 * tan(fov/2) * distance
        rovX = 2*0.44523*distanceSonar*100; // 2 * tan(48deg/2) * dist (cm)
        rovY = 2*0.32492*distanceSonar*100; // 2 * tan(36deg/2) * dist (cm)
        // accumulate the final x, y location
        location[0] += (currLocation.x/WIDTH_RES)*rovX;
        location[1] += (currLocation.y/HEIGHT_RES)*rovY;
    }
    // break conditions
    if(waitKey(1) >= 0)
        break;
    if(end_run)
        break;
    std::swap(prevgray, gray);
}
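For reference, calcAvgOpticalFlow is my own helper and is not shown here; the idea is simply to sample the dense flow field on a grid and average it into a single displacement (which is what currLocation holds). A simplified sketch of that idea (the name and signature here are illustrative, not my exact helper):

// Simplified sketch of the averaging idea behind calcAvgOpticalFlow.
static cv::Point2f averageFlow(const cv::Mat &flow, int step)
{
    cv::Point2f sum(0.f, 0.f);
    int n = 0;
    // sample the dense CV_32FC2 flow field on a sparse grid
    for (int y = 0; y < flow.rows; y += step)
        for (int x = 0; x < flow.cols; x += step)
        {
            sum += flow.at<cv::Point2f>(y, x);
            ++n;
        }
    return n > 0 ? sum * (1.0f / n) : sum;
}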
UPDATE:
After successfully adding the rotation, I still need the image to stay centered (and not drift out of the frame window, as shown below). I guess I need some kind of translation: I want the center of the source image to map to the center of the destination image. How can I add this as well?
The rotation function that works:
void rotateFrame(const Mat &input, Mat &output, Mat &A, double roll, double pitch, double yaw)
{
    // rotation about the x-axis (roll)
    Mat Rx = (Mat_<double>(3, 3) <<
              1,         0,          0,
              0, cos(roll), -sin(roll),
              0, sin(roll),  cos(roll));
    // rotation about the y-axis (pitch)
    Mat Ry = (Mat_<double>(3, 3) <<
               cos(pitch), 0, sin(pitch),
                        0, 1,          0,
              -sin(pitch), 0, cos(pitch));
    // rotation about the z-axis (yaw)
    Mat Rz = (Mat_<double>(3, 3) <<
              cos(yaw), -sin(yaw), 0,
              sin(yaw),  cos(yaw), 0,
                     0,         0, 1);
    // combined rotation
    Mat R = Rx*Ry*Rz;
    // homography for a pure camera rotation: pixel -> ray (A.inv()),
    // rotate the ray, then project back to pixels (A)
    Mat trans = A*R*A.inv();
    warpPerspective(input, output, trans, input.size());
}
When I run it with rotateFrame(origFrame, processedFrame, cameraMatrix, 0, 0, 0);
I get the image as expected:
But when I run it with 20 degrees of roll, rotateFrame(origFrame, processedFrame, cameraMatrix, 20*(M_PI/180), 0, 0);, the image goes out of the frame window:
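The direction I am currently considering (untested, so I am not sure it is the right approach) is to map the source center through the homography and then compose a translation that moves it back to the frame center, roughly like this:

// Sketch of my centering idea (not verified): given the homography
// `trans` built in rotateFrame, shift the result so the source center
// lands at the destination center.
void warpCentered(const cv::Mat &input, cv::Mat &output, const cv::Mat &trans)
{
    // where does the source center go under the homography?
    std::vector<cv::Point2f> src = { cv::Point2f(input.cols / 2.0f, input.rows / 2.0f) };
    std::vector<cv::Point2f> dst;
    cv::perspectiveTransform(src, dst, trans);
    // translation that moves the warped center back to the frame center
    cv::Mat T = (cv::Mat_<double>(3, 3) <<
                 1, 0, input.cols / 2.0 - dst[0].x,
                 0, 1, input.rows / 2.0 - dst[0].y,
                 0, 0, 1);
    cv::warpPerspective(input, output, T * trans, input.size());
}

Is composing a translation with the homography like this the correct way to keep the image centered, or is there a better way?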