4 votes

I am working on a project for estimating the location of a UAV (quadcopter) using an optical-flow technique. I currently have code that uses the Farneback algorithm from OpenCV, and it works fine as long as the camera is always pointing at the ground.

Now I want to add support for the case when the camera is not pointing straight down, i.e. the quadcopter has some pitch / roll / yaw (Euler angles). The quadcopter's Euler angles are known, and I am searching for a method to compute and apply the transformation needed, based on the current Euler angles, so that the resulting image looks as if it was taken from directly above (see image below).

I found methods that calculate the transformation given two sets (source and destination) of 4 corners, via the findHomography or getPerspectiveTransform functions from OpenCV. But I couldn't find any method that can do it knowing only the Euler angles (because I don't know the destination image corners).

So my question is: what method can I use, and how, in order to transform a frame to look as if it was taken from above, using only the Euler angles (and the camera's height above the ground, if necessary)?

In order to demonstrate what I need:

[image: sketch of the desired result - the tilted camera frame re-projected as if taken from directly above]

The relevant part of my current code is below:

for(;;)
{
    Mat m, disp, warp;
    vector<Point2f> corners;
    // grab the next frame - still distorted
    cap >> origFrame;
    // undistort the frame using the calibration parameters
    cv::undistort(origFrame, undistortFrame, cameraMatrix, distCoeffs, noArray());
    // convert to grayscale to reduce processing cost
    cvtColor(undistortFrame, gray, COLOR_BGR2GRAY);

    if( !prevgray.empty() )
    {
        // calculate dense optical flow (Farneback)
        calcOpticalFlowFarneback(prevgray, gray, uflow, 0.5, 3 /* def 3 */, 10 /* def 15 */, 3, 3, 1.2 /* def 1.2 */, 0);
        uflow.copyTo(flow);

        // get the average flow
        calcAvgOpticalFlow(flow, 16, corners);

        // range of view covered by the frame: 2*tan(fov/2)*distance
        rovX = 2*0.44523*distanceSonar*100;     // 2 * tan(48/2 deg) * dist(cm)
        rovY = 2*0.32492*distanceSonar*100;     // 2 * tan(36/2 deg) * dist(cm)

        // accumulate the x, y location
        location[0] += (currLocation.x/WIDTH_RES)*rovX;
        location[1] += (currLocation.y/HEIGHT_RES)*rovY;
    }
    // break conditions
    if(waitKey(1)>=0)
        break;
    if(end_run)
        break;
    std::swap(prevgray, gray);
}

UPDATE:

After successfully adding the rotation, I still need the image to stay centered (and not drift outside of the frame window, as shown below). I guess I need some kind of translation: I want the center of the source image to map to the center of the destination image. How can I add this as well?

The rotation function that works:

void rotateFrame(const Mat &input, Mat &output, Mat &A, double roll, double pitch, double yaw){
    // Rotation matrices around the X (roll), Y (pitch) and Z (yaw) axes
    Mat Rx = (Mat_<double>(3, 3) <<
              1,         0,          0,
              0, cos(roll), -sin(roll),
              0, sin(roll),  cos(roll));
    Mat Ry = (Mat_<double>(3, 3) <<
               cos(pitch), 0, sin(pitch),
               0,          1, 0,
              -sin(pitch), 0, cos(pitch));
    Mat Rz = (Mat_<double>(3, 3) <<
              cos(yaw), -sin(yaw), 0,
              sin(yaw),  cos(yaw), 0,
              0,         0,        1);

    Mat R = Rx*Ry*Rz;
    // Homography for a pure camera rotation: H = A * R * A^-1
    Mat trans = A*R*A.inv();

    warpPerspective(input, output, trans, input.size());
}

When I run it with rotateFrame(origFrame, processedFrame, cameraMatrix, 0, 0, 0); I get the image as expected:

[image: output frame, unchanged as expected]

But when I run it with a 20 degree roll, rotateFrame(origFrame, processedFrame, cameraMatrix, 20*(M_PI/180), 0, 0);, the image moves outside of the frame window:

[image: output frame shifted partly outside the window]

Comments:

"findHomography or getPerspectiveTransform functions from OpenCV. But none that were doing it without knowing the corners positions but with known angles" - Are you sure they don't work with corner positions? Because I do my transformation using four corners. - Khalil Khalaf

It wasn't clear enough; I've edited my post. I meant that I don't know the destination corners, but I do know the Euler angles. - A. Sarid

3 Answers

5 votes

If you have the calibration intrinsics matrix A (3x3), and there is no translation between the camera poses, all you need to find the homography H (3x3) is to construct the rotation matrix R (3x3) from the Euler angles and apply the following formula:

H = A * R * A.inv()

Where .inv() is matrix inversion.

UPDATED:

If you want to keep the image centered, you just need to add a translation: find where the warped position of the image center lands under H, then translate that point back to the center. For a 320x240 image (note the result of the multiplication is homogeneous, so divide by w):

|dx'|       | 320 / 2 |
|dy'| = H * | 240 / 2 |
|w  |       | 1       |

dx = dx' / w,   dy = dy' / w

    | 1 0 (320/2 - dx) |
W = | 0 1 (240/2 - dy) | * H
    | 0 0 1            |

W is your final transformation.
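
A minimal sketch of this centering step in OpenCV (the helper name centerHomography is mine; H is the A * R * A.inv() homography from above, with the hard-coded 320x240 replaced by the actual frame size):

#include <opencv2/opencv.hpp>
using namespace cv;

// Translate H so that the image center maps to the output center
Mat centerHomography(const Mat &H, Size size)
{
    // Warp the center point (homogeneous coordinates, CV_64F)
    Mat c = (Mat_<double>(3, 1) << size.width / 2.0, size.height / 2.0, 1.0);
    Mat p = H * c;
    double dx = p.at<double>(0) / p.at<double>(2);   // divide by w
    double dy = p.at<double>(1) / p.at<double>(2);

    // Translation that moves the warped center back to the center
    Mat T = (Mat_<double>(3, 3) <<
             1, 0, size.width / 2.0 - dx,
             0, 1, size.height / 2.0 - dy,
             0, 0, 1);
    return T * H;   // the final transformation W
}

It could then replace trans in the rotateFrame function above: warpPerspective(input, output, centerHomography(trans, input.size()), input.size());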

2 votes

I came to the conclusion that I had to work with 4x4 transformation matrices in order to get what I wanted. To compose the right homography matrix we need:

  1. The 3D rotation matrix R.
  2. The camera calibration intrinsics matrix A2 and its inverse A1 (in the code below, A2 projects 3D points to the image and A1 lifts image points into 3D).
  3. The translation matrix T.

We can compose the 3D rotation matrix R by multiplying the rotation matrices around the X, Y and Z axes:

Mat R = RZ * RY * RX  

In order to apply the transformation to the image and keep it centered, we need to add a translation given by a 4x4 matrix with dx = 0, dy = 0, dz = 1 (since A1 normalizes by the focal length, dz = 1 places the rotated image plane back at unit distance in front of the camera):

Mat T = (Mat_<double>(4, 4) <<
         1, 0, 0, dx,
         0, 1, 0, dy,
         0, 0, 1, dz,
         0, 0, 0, 1);

Given all these matrices we can compose our homography matrix H:

Mat H = A2 * (T * (R * A1))

With this homography matrix we can then use the warpPerspective function from OpenCV to apply the transformation.

warpPerspective(input, output, H, input.size(), INTER_LANCZOS4);

For completeness, here is the full code of this solution:

#include <opencv2/opencv.hpp>
using namespace cv;

void rotateImage(const Mat &input, UMat &output, double roll, double pitch, double yaw,
                 double dx, double dy, double dz, double f, double cx, double cy)
{
    // Camera calibration intrinsics matrix (projects 3D back to 2D)
    Mat A2 = (Mat_<double>(3,4) <<
              f, 0, cx, 0,
              0, f, cy, 0,
              0, 0, 1,  0);
    // Inverted camera calibration intrinsics matrix (lifts 2D pixels into 3D)
    Mat A1 = (Mat_<double>(4,3) <<
              1/f, 0,   -cx/f,
              0,   1/f, -cy/f,
              0,   0,   0,
              0,   0,   1);
    // Rotation matrices around the X, Y, and Z axes
    Mat RX = (Mat_<double>(4, 4) <<
              1, 0,         0,          0,
              0, cos(roll), -sin(roll), 0,
              0, sin(roll), cos(roll),  0,
              0, 0,         0,          1);
    Mat RY = (Mat_<double>(4, 4) <<
              cos(pitch),  0, sin(pitch), 0,
              0,           1, 0,          0,
              -sin(pitch), 0, cos(pitch), 0,
              0,           0, 0,          1);
    Mat RZ = (Mat_<double>(4, 4) <<
              cos(yaw), -sin(yaw), 0, 0,
              sin(yaw),  cos(yaw), 0, 0,
              0,         0,        1, 0,
              0,         0,        0, 1);
    // Translation matrix
    Mat T = (Mat_<double>(4, 4) <<
             1, 0, 0, dx,
             0, 1, 0, dy,
             0, 0, 1, dz,
             0, 0, 0, 1);
    // Compose the rotation matrix from (RX, RY, RZ)
    Mat R = RZ * RY * RX;
    // Final transformation matrix
    Mat H = A2 * (T * (R * A1));
    // Apply the transformation
    warpPerspective(input, output, H, input.size(), INTER_LANCZOS4);
}
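
A usage sketch, pulling f, cx and cy out of the calibrated cameraMatrix from the question (this assumes square pixels, i.e. fx ≈ fy; the variable names are just for illustration):

double f  = cameraMatrix.at<double>(0, 0);   // focal length in pixels
double cx = cameraMatrix.at<double>(0, 2);   // principal point x
double cy = cameraMatrix.at<double>(1, 2);   // principal point y

UMat processedFrame;
// 20 degree roll, no pitch/yaw; dz = 1 keeps the image plane at unit distance
rotateImage(origFrame, processedFrame, 20 * (M_PI / 180), 0, 0,
            0, 0, 1, f, cx, cy);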

Result: [image: warped output frame]

0 votes

This is how I do it in Eigen, using 4 corners:

// Desired four corners
std::vector<Eigen::Vector2f> Normalized_Reference_Pattern = { Eigen::Vector2f(0, 0), Eigen::Vector2f(0, 2), Eigen::Vector2f(2, 0), Eigen::Vector2f(2, 2) }; 

// Current four points
std::vector<Eigen::Vector2f> CurrentCentroids = { /* whatever four corners you want, in the same order as above */ };

// Transform for current to desired
auto Master_Transform = get_perspective_transform(CurrentCentroids, Normalized_Reference_Pattern);

// the same transformation can be applied to other points in the image
// (not just the corners), using homogeneous coordinates:
Eigen::Vector3f warped = Master_Transform * Eigen::Vector3f(currentX, currentY, 1.0f);
Eigen::Vector2f Master_Transform_Centroid = warped.head<2>() / warped.z();

And here is my black box:

#include <opencv2/imgproc.hpp>
#include <opencv2/core/eigen.hpp>   // cv::cv2eigen

Eigen::Matrix3f get_perspective_transform(const std::vector<Eigen::Vector2f>& points_from,
                                          const std::vector<Eigen::Vector2f>& points_to)
{
    // convert::to_cv is my own Eigen -> cv::Point2f conversion helper
    cv::Mat transform_cv = cv::getPerspectiveTransform(
        convert::to_cv(points_from),
        convert::to_cv(points_to));

    Eigen::Matrix3f transform_eigen;
    cv::cv2eigen(transform_cv, transform_eigen);
    return transform_eigen;
}
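
convert::to_cv is not shown above; a minimal version of such a helper (the name and signature come from the snippet, the body is an assumption) might look like:

#include <vector>
#include <opencv2/core.hpp>
#include <Eigen/Core>

namespace convert {
// Convert Eigen 2D points to the cv::Point2f vector expected by
// cv::getPerspectiveTransform (which requires exactly 4 points)
inline std::vector<cv::Point2f> to_cv(const std::vector<Eigen::Vector2f>& pts)
{
    std::vector<cv::Point2f> out;
    out.reserve(pts.size());
    for (const auto& p : pts)
        out.emplace_back(p.x(), p.y());
    return out;
}
} // namespace convert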