3 votes

I am trying to use OpenCV for camera calibration. I do not have a problem as long as I use cv2.findChessboardCorners to find my calibration targets in the image, but if I use my own function to find the points and build the point arrays myself, I get an error when trying to estimate the camera parameters. Here is a minimal example that throws the same error.

import numpy as np
import cv2


# Synthetic 4x3 calibration target with a 20-unit grid spacing
pattern_size         = (4, 3)
pattern_points       = np.zeros( (np.prod(pattern_size), 3), np.float32 )
pattern_points[:,:2] = np.indices(pattern_size).T.reshape(-1, 2)
pattern_points      *= 20

obj_points = []
img_points = []

# Pretend the same corners were found in five frames
for fn in range(5):
    corners = np.asarray(pattern_points[:,1:], dtype=np.float32)

    img_points.append(corners.reshape(-1, 2))
    obj_points.append(pattern_points)


ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points,
                                                   img_points,
                                                   (1088, 2048),
                                                   None,
                                                   None)

If I instead make the corners array with the usual

ret, corners = cv2.findChessboardCorners(gray, (4,3))

it works fine. The type of corners is an ndarray of shape (12, 2) in both cases, and the elements are float32.
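For reference, this is a minimal sketch of how I compare the two arrays beyond shape and dtype (the findChessboardCorners part is commented out since it needs a real image; names like slice_corners and cb_corners are just for illustration):

import numpy as np

# Hand-built corners, exactly as in the example above
pattern_size   = (4, 3)
pattern_points = np.zeros((np.prod(pattern_size), 3), np.float32)
pattern_points[:, :2] = np.indices(pattern_size).T.reshape(-1, 2)
pattern_points *= 20

slice_corners = np.asarray(pattern_points[:, 1:], dtype=np.float32).reshape(-1, 2)
print(slice_corners.shape, slice_corners.dtype, slice_corners.flags['C_CONTIGUOUS'])

# Detected corners (assumes a grayscale chessboard image `gray` is available)
# ok, cb = cv2.findChessboardCorners(gray, (4, 3))
# cb_corners = cb.reshape(-1, 2)
# print(cb_corners.shape, cb_corners.dtype, cb_corners.flags['C_CONTIGUOUS'])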

Why do I get this error:

OpenCV Error: Unsupported format or combination of formats (imagePoints1 should contain vector of vectors of points of type Point2f) in cv::collectCalibrationData, file C:\builds\master_PackSlaveAddon-win32-vc12-static\opencv\modules\calib3d\src\calibration.cpp, line 2982

when I try to construct the img_points array from scratch instead of using cv2.findChessboardCorners?

Could you give a minimal reproducible example, please? Trying to run your code gives (inter alia): cv2.error: /home/openstack/opencv/opencv/opencv-3.0.0/modules/calib3d/src/calibration.cpp:2982: error: (-210) imagePoints1 should contain vector of vectors of points of type Point2f in function collectCalibrationData – boardrider

Thanks. Yes, I pasted the wrong error; I will correct that right away. The problem description is correct, though, and the code reproduces the error, so it would be great if you could give me any pointers. – julietKiloRomeo

I've been having the same issue. I solved it by using a vector of vectors, as documented: for each frame, obPts = [[qx0, qy0, qz0], ..., [qxn, qyn, qzn]] (3D object points) and imPts = [[px0, py0], ..., [pxn, pyn]] (2D image points), and then call imPts.astype('float32') and obPts.astype('float32') when passing them to the function. If more than one frame is used, do that for each frame. Hope that does the trick. – trox

OK, that works, thanks. I made the following change to the example: img_points.append(corners.reshape(-1, 2).astype('float32')) and obj_points.append(pattern_points.astype('float32')). Since I had already cast to float32, I was surprised that it made a difference. – julietKiloRomeo

Can we consider this solved? – trox

1 Answer

2 votes

I've been having the same issue. I solved it by using a vector of vectors, as documented: for each frame, build obPts = [[qx0, qy0, qz0], ..., [qxn, qyn, qzn]] (the 3D object points) and imPts = [[px0, py0], ..., [pxn, pyn]] (the 2D image points), and then call imPts.astype('float32') and obPts.astype('float32') when passing them to the function. If more than one frame is used, do that for each frame. That does the trick.
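Applied to the example in the question, a minimal sketch of that change looks like this (the .astype('float32') copies inside the loop are the only difference from the original code, matching the fix the asker reported in the comments):

import numpy as np
import cv2

pattern_size         = (4, 3)
pattern_points       = np.zeros( (np.prod(pattern_size), 3), np.float32 )
pattern_points[:,:2] = np.indices(pattern_size).T.reshape(-1, 2)
pattern_points      *= 20

obj_points = []
img_points = []

for fn in range(5):
    corners = np.asarray(pattern_points[:,1:], dtype=np.float32)

    # Cast (and thereby copy) each per-frame array before handing it to OpenCV
    img_points.append(corners.reshape(-1, 2).astype('float32'))
    obj_points.append(pattern_points.astype('float32'))

ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points,
                                                   img_points,
                                                   (1088, 2048),
                                                   None,
                                                   None)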