4
votes

I am interested in finding the disparity map of a scene. To start with, I did stereo calibration using the following code, which I wrote myself with a little help from Google, after failing to find any helpful tutorials on this written in Python for OpenCV 2.4.10.

I took images of a chessboard simultaneously on both cameras and saved them as left*.jpg and right*.jpg.

import numpy as np
import cv2
import glob

# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*9,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)


# Arrays to store object points and image points from all the images.
objpointsL = [] # 3d point in real world space
imgpointsL = [] # 2d points in image plane.
objpointsR = []
imgpointsR = []

images = glob.glob('left*.jpg')

for fname in images:
    img = cv2.imread(fname)
    grayL = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

    # Find the chess board corners
    ret, cornersL = cv2.findChessboardCorners(grayL, (9,6),None)
    # If found, add object points, image points (after refining them)
    if ret == True:
        objpointsL.append(objp)

        cv2.cornerSubPix(grayL,cornersL,(11,11),(-1,-1),criteria)
        imgpointsL.append(cornersL)


images = glob.glob('right*.jpg')

for fname in images:
    img = cv2.imread(fname)
    grayR = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

    # Find the chess board corners
    ret, cornersR = cv2.findChessboardCorners(grayR, (9,6),None)

    # If found, add object points, image points (after refining them)
    if ret == True:
        objpointsR.append(objp)

        cv2.cornerSubPix(grayR,cornersR,(11,11),(-1,-1),criteria)
        imgpointsR.append(cornersR)



retval,cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, R, T, E, F = cv2.stereoCalibrate(objpointsL, imgpointsL, imgpointsR, (320,240))

How do I rectify the images? What other steps should I do before going on to find the disparity map? I read somewhere that while calculating the disparity map, the features detected in both frames should lie on the same horizontal line. Please help me out here. Any help would be much appreciated.

3
Just a note: at this point I have a lot of experience with this, and feel comfortable saying for certain that the image rectification code in OpenCV is not as robust as the Bouguet MATLAB toolbox. You may find major overfitting in OpenCV if your data is good, in which case rectification will fail and you'll get mostly black images. The Bouguet toolbox is nice because you can exclude parameters from the regression with more control than OpenCV, in order to reduce overfitting in the stereo calibration. Maybe you won't run into this problem. – kevinkayaks

3 Answers

3
votes

You need cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2 and a newCameraMatrix for cv2.undistort().

You can get newCameraMatrix using cv2.getOptimalNewCameraMatrix().

So, in the remainder of your script, paste this:

# Assuming you have left01.jpg and right01.jpg that you want to rectify
lFrame = cv2.imread('left01.jpg')
rFrame = cv2.imread('right01.jpg')
h, w = lFrame.shape[:2] # shape is (height, width, channels); both frames should be of the same shape
frames = [lFrame, rFrame]

# Params from camera calibration
camMats = [cameraMatrix1, cameraMatrix2]
distCoeffs = [distCoeffs1, distCoeffs2]

camSources = [0,1]
for src in camSources:
    distCoeffs[src][0][4] = 0.0 # zero out k3, the third radial distortion coefficient, so only k1, k2, p1, p2 are used

# The rectification process
newCams = [0,0]
roi = [0,0]
for src in camSources:
    newCams[src], roi[src] = cv2.getOptimalNewCameraMatrix(cameraMatrix = camMats[src], 
                                                           distCoeffs = distCoeffs[src], 
                                                           imageSize = (w,h), 
                                                           alpha = 0)



rectFrames = [0,0]
for src in camSources:
    # undistort using the optimal new camera matrix computed above
    rectFrames[src] = cv2.undistort(frames[src],
                                    camMats[src],
                                    distCoeffs[src],
                                    None,
                                    newCams[src])

# See the results
view = np.hstack([frames[0], frames[1]])    
rectView = np.hstack([rectFrames[0], rectFrames[1]])

cv2.imshow('view', view)
cv2.imshow('rectView', rectView)

# Wait indefinitely for any keypress
cv2.waitKey(0)

Hope that gets you on your way to the next thing, which might be calculating "disparity maps" ;)
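
If you also want the two views row-aligned (so that corresponding features lie on the same horizontal line, as mentioned in the question), a minimal sketch of full stereo rectification could look like the one below. It assumes R and T are the rotation and translation returned by the cv2.stereoCalibrate call in the question, and that h, w come from the frame shape as above:

# Minimal stereo rectification sketch (assumes R and T from cv2.stereoCalibrate)
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(cameraMatrix1, distCoeffs1,
                                                  cameraMatrix2, distCoeffs2,
                                                  (w, h), R, T)

# Build the undistort + rectify maps once, then remap each frame
mapL1, mapL2 = cv2.initUndistortRectifyMap(cameraMatrix1, distCoeffs1, R1, P1,
                                           (w, h), cv2.CV_32FC1)
mapR1, mapR2 = cv2.initUndistortRectifyMap(cameraMatrix2, distCoeffs2, R2, P2,
                                           (w, h), cv2.CV_32FC1)

rectL = cv2.remap(lFrame, mapL1, mapL2, cv2.INTER_LINEAR)
rectR = cv2.remap(rFrame, mapR1, mapR2, cv2.INTER_LINEAR)

After the remap, corresponding points in rectL and rectR should lie on the same image row, which is what the block-matching algorithms expect.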

Reference:

http://www.janeriksolem.net/2014/05/how-to-calibrate-camera-with-opencv-and.html

1
votes

Try this code; I was already able to solve the error this way:

retVal, cm1, dc1, cm2, dc2, r, t, e, f = cv2.stereoCalibrate(objpointsL, imgpointsL, imgpointsR, (320, 240), None, None, None, None)

The four extra None arguments correspond to the optional cameraMatrix1, distCoeffs1, cameraMatrix2 and distCoeffs2 parameters in the OpenCV 2.4 Python binding.
0
votes

First, use the OpenCV calibration application or the MATLAB calibration toolbox to calculate your camera parameters. With those parameters, you can rectify your images.

After rectification, refer to the Python sample in OpenCV's codebase (samples/python/stereo_match.py) to compute the disparity map.
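
For reference, a minimal sketch in the spirit of that sample might look like this (the parameter values are only starting points, and rectL/rectR are hypothetical names for your rectified left and right images):

import cv2
import numpy as np

# rectL, rectR: your rectified left/right images (hypothetical names)
window_size = 3
min_disp = 0
num_disp = 64  # must be a multiple of 16

stereo = cv2.StereoSGBM(minDisparity=min_disp,
                        numDisparities=num_disp,
                        SADWindowSize=window_size,
                        P1=8 * 3 * window_size ** 2,
                        P2=32 * 3 * window_size ** 2,
                        disp12MaxDiff=1,
                        uniquenessRatio=10,
                        speckleWindowSize=100,
                        speckleRange=32,
                        fullDP=False)

# compute() returns fixed-point disparities scaled by 16
disp = stereo.compute(rectL, rectR).astype(np.float32) / 16.0

cv2.imshow('disparity', (disp - min_disp) / num_disp)
cv2.waitKey(0)

This mirrors how stereo_match.py constructs cv2.StereoSGBM in OpenCV 2.4; in later OpenCV versions the matcher is created with cv2.StereoSGBM_create() instead.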