3
votes

I am working on a camera calibration program using the OpenCV/Python example (from: OpenCV Tutorials) as a guidebook.

Question: How do I tailor this example code to account for the size of a square on a particular chessboard pattern? My understanding of the camera calibration process is that this information must somehow be used otherwise the values given by:

cv2.calibrateCamera()

will be incorrect.

Here is the portion of my code that reads in image files and runs through the calibration process to produce the camera matrix and other values.

import cv2
import numpy as np
import glob

"""
Corner Finding
"""
# termination criteria 
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# Prepare object points, like (0,0,0), (1,0,0), ....,(6,5,0)
objp = np.zeros((5*5,3), np.float32)
objp[:,:2] = np.mgrid[0:5,0:5].T.reshape(-1,2)

# Arrays to store object points and image points from all images
objpoints = []
imgpoints = []

counting = 0

# Import Images
images = glob.glob('dir/sub dir/Images/*')

for fname in images:

    img = cv2.imread(fname)     # Read images
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)    # Convert to grayscale



    # Find the chess board corners
    ret, corners = cv2.findChessboardCorners(gray, (5,5), None)

    # if found, add object points, image points (after refining them)
    if ret == True:
        objpoints.append(objp)

        corners = cv2.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)
        imgpoints.append(corners)

        #Draw and display corners
        cv2.drawChessboardCorners(img, (5,5), corners, ret)
        counting += 1

        print(str(counting) + ' Viable Image(s)')

        cv2.imshow('img', img)
        cv2.waitKey(500)

cv2.destroyAllWindows()        


# Calibrate Camera    
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
objpoints are your chessboard points in 3D space. Probably they are (0,0); (0,1); (1,1); (1,0); (2,0); etc. In that example the square size (the edge length) is 1. Just rescale those point positions to get any other square size. – Micka

@Micka So if my chessboard has squares averaging 25.3 mm, then those values should count up as (0,0); (0,0.0253); ...? – M. Ruffolo

Exactly. Or (0, 25.3) if you prefer the unit of your 3D coordinates (and camera extrinsics) to be [mm]. – Micka

2 Answers

5
votes

If your square size is, say, 30 mm, then multiply objp[:,:2] by that value, like this:

objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)*30   # 30 mm size of square

objp[:,:2] is the set of chessboard corner points, given as (0,0), (0,1), (0,2), ..., (8,5). The point (0,0) is the upper-left-most square corner and (8,5) is the lower-right-most one. As given, these points are unitless, but if we multiply them by the square size (for example 30 mm), they become (0,0), (0,30), ..., (240,150), which are real-world coordinates. Your translation vectors will then be in mm as well.
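A minimal sketch of the scaled object-point setup described above, assuming a 9x6 inner-corner pattern and a 30 mm square (substitute your own measured size):

```python
import numpy as np

square_size = 30.0  # mm; measure your own board's squares

# 9x6 inner corners, scaled from grid units to millimetres
objp = np.zeros((9 * 6, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2) * square_size

print(objp[0])   # first corner at the origin: [0. 0. 0.]
print(objp[-1])  # far corner: [240. 150. 0.]
```

This objp is what gets appended to objpoints for every image in which the pattern is found, exactly as in the question's loop.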

0
votes

From here: https://docs.opencv.org/4.5.1/dc/dbb/tutorial_py_calibration.html

What about the 3D points from real world space? Those images are taken from a static camera and chess boards are placed at different locations and orientations. So we need to know (X,Y,Z) values. But for simplicity, we can say chess board was kept stationary at XY plane, (so Z=0 always) and camera was moved accordingly. This consideration helps us to find only X,Y values. Now for X,Y values, we can simply pass the points as (0,0), (1,0), (2,0), ... which denotes the location of points. In this case, the results we get will be in the scale of size of chess board square. But if we know the square size, (say 30 mm), we can pass the values as (0,0), (30,0), (60,0), ... . Thus, we get the results in mm. (In this case, we don't know square size since we didn't take those images, so we pass in terms of square size).