I am working on camera-lidar calibration and have been stuck for some time on the following problem:
I am using a USB camera and a 2D lidar. I have the coordinates of corresponding points in both the lidar frame and the camera frame (let's say I have 3 points: their coordinates in the lidar frame and the coordinates of the same 3 points in the camera frame).
Example for one point:
lidar_pt1(xl, yl)
camera_pt1(xc, yc, zc)
...
are known.
If I hardcode the transformation matrix I get the expected result. Now I am trying not to hardcode it, but to calculate it automatically from the known coordinate values. What I have is 3 points as 2D coordinates in the lidar frame and the same 3 points as 3D coordinates in the camera frame. This is where I am struggling with the math: how do I calculate the rotation from the coordinate values I have? Is there a way to get that rotation?
camera_pt1 = TransformMat * lidar_pt1
TransformMat = ?
I saw some examples using SVD (http://nghiaho.com/?page_id=671), but I think they require larger point sets, and the minimum of 3 points would not give the best result.
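For reference, here is a minimal Python (NumPy) sketch of that SVD approach, under the assumption that the 2D lidar points can be lifted to 3D by setting z = 0 (i.e. the scan plane is the lidar frame's z = 0 plane). The lidar points, rotation, and translation below are made-up values just to illustrate the setup; in the noise-free case, 3 non-collinear correspondences are enough to recover R and t exactly.

```python
import numpy as np

def rigid_transform_3d(A, B):
    """Find R (3x3) and t (3,) such that R @ A[i] + t == B[i],
    using the SVD method (Kabsch) described at nghiaho.com."""
    assert A.shape == B.shape
    centroid_A = A.mean(axis=0)
    centroid_B = B.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (A - centroid_A).T @ (B - centroid_B)
    U, S, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection
        Vt[2, :] *= -1
        R = Vt.T @ U.T
    t = centroid_B - R @ centroid_A
    return R, t

# Hypothetical 2D lidar points, lifted to 3D with z = 0 (assumption).
lidar_pts = np.array([[1.0, 2.0, 0.0],
                      [3.0, 1.0, 0.0],
                      [2.5, 4.0, 0.0]])

# Made-up ground-truth transform, used only to generate matching
# camera-frame points and check the recovery.
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 1.0])
camera_pts = lidar_pts @ R_true.T + t_true

R, t = rigid_transform_3d(lidar_pts, camera_pts)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

With noisy real measurements more correspondences would help (the same function accepts N x 3 arrays), but the method itself does not require more than 3 non-collinear points.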