I use the following code to georeference images, with these inputs:
grid = "for example a utm grid"
img_raw = cv2.imread(filename)
mtx, dist = "intrinsic camera matrix and distortion coefficients from camera calibration"
src_pts = "camera location of gcp on undistorted image"
dst_pts = "world location of gcp in the grid coordinate"
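For reference, here is a minimal sketch of the shapes these inputs take (the numbers are made-up placeholders, not my real calibration or GCPs):

import numpy as np

mtx = np.array([[1500.0, 0.0, 960.0],
                [0.0, 1500.0, 540.0],
                [0.0, 0.0, 1.0]])                  # 3x3 intrinsic camera matrix
dist = np.zeros(5)                                 # (k1, k2, p1, p2, k3)
src_pts = np.float32([[100, 200], [800, 150],
                      [900, 700], [50, 650]])      # GCP pixels in the undistorted image
dst_pts = np.float32([[0, 0], [50, 0],
                      [50, 30], [0, 30]])          # same GCPs in grid (e.g. local UTM) coordinates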
I correct the camera distortion and apply the homography:
img = cv2.undistort(img_raw, mtx, dist, None, None)
H, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC,5.0)
img_geo = cv2.warpPerspective(img, H, (grid.shape[1], grid.shape[0]),
                              flags=cv2.INTER_NEAREST, borderValue=0)  # warp with H; dsize is (width, height)
Then I want to get the location of the camera. I try to use the rotation and translation vectors (rvec, tvec) computed by cv2.solvePnP, as shown here for example. If I am right, I need the image and world coordinates of at least 4 coplanar points.
flag, rvec, tvec = cv2.solvePnP(world, cam, mtx, dist)
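If that works, my understanding is that the camera position in world (grid) coordinates then follows from rvec and tvec in the usual way (a sketch, assuming solvePnP succeeded):

R, _ = cv2.Rodrigues(rvec)     # rotation vector -> 3x3 rotation matrix
cam_pos = -R.T @ tvec          # camera centre in world (grid) coordinates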
Again, if I am right, the image points passed to cv2.solvePnP need to come from the raw (distorted) image frame, not from the undistorted frame like src_pts, because solvePnP is given the distortion coefficients dist.
So my question is: how can I get the pixel locations of src_pts in the raw image frame? Or is there another way to get rvec and tvec?
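One alternative I am considering, though I am not sure it is valid: since cv2.undistort with newCameraMatrix=None keeps the same intrinsics mtx, perhaps I can pass the undistorted src_pts to solvePnP with no distortion coefficients, assuming the GCPs lie on the z = 0 ground plane. Something like:

world = np.float32([[x, y, 0.0] for x, y in dst_pts])   # GCPs on the z = 0 ground plane
cam = np.float32(src_pts).reshape(-1, 1, 2)             # GCP pixels in the undistorted image
ok, rvec, tvec = cv2.solvePnP(world, cam, mtx, None)    # None: image points are already undistorted

Would that give the same rvec and tvec as using the raw-frame pixel locations together with dist?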