
I have a setup where a (2D) camera is mounted on the end-effector of a robot arm (the eye-in-hand configuration shown in the OpenCV documentation).

I want to calibrate the camera and find the transformation from camera to end-effector. I have already calibrated the camera intrinsics with a checkerboard, following the OpenCV guide, Camera Calibration, and obtained undistorted images.

My problem is finding the transformation from camera to end-effector. I can see that OpenCV has a function, calibrateHandEye(), which supposedly achieves this. I already have the "gripper2base" vectors but am missing the "target2cam" vectors. Should these be based on the size of the checkerboard squares, or what am I missing? Any guidance in the right direction would be appreciated.

IMHO camera calibration is not enough, because it only fixes the intrinsics of your camera system. To get the exterior orientation of the camera you have to know the size of your calibration pattern (checkerboard). The paper mentioned in the OpenCV documentation calls it the calibration block world coordinate frame (CW, page 347, Fig. 2). These should be the target2cam vectors (I have not worked with robot arms yet). – Grillteller
Some time ago I found interesting code in George Hotz's "live coding SLAM" video. It contains many useful tricks; take a look. – Mario Abbruscato
Are you implementing everything in ROS? In ROS we can set the TF of the camera and the end-effector. – Arjun Kumar
It is not ROS; it is a pure Python implementation. – ChrisRun

1 Answer


You are close to the answer.

Yes, it is based on the size of the checkerboard squares. But instead of taking those parameters and an image directly, calibrateHandEye() takes target2cam. How do you get target2cam? Simply move your robot arm so that the camera can see the chessboard and take a picture. From the chessboard image and the camera intrinsics you can compute target2cam; calculating the extrinsics from a chessboard is already provided in OpenCV.

Repeat this a couple of times at different robot poses and collect multiple target2cam transforms, together with the matching gripper2base transforms. Pass them to calibrateHandEye() and you will get what you need.