I want to fuse LiDAR points {X, Y, Z, 1} onto the camera image {u, v}. I have the LiDAR points, the camera matrix (K), the distortion coefficients (D), the positions of the camera and the LiDAR (x, y, z), and their rotations as quaternions (w + xi + yj + zk). Three coordinate systems are involved: the vehicle axle coordinate system (X: forward, Y: left, Z: up), the LiDAR coordinate system (X: right, Y: forward, Z: up), and the camera coordinate system (X: right, Y: down, Z: forward). I tried the approach below, but the points do not fuse properly; all of them are plotted in the wrong place.
For the given rotations and positions of the camera and the LiDAR, I compute the translation vectors using the equations below:
t_lidar = R_lidar * Position_lidar^T
t_camera = R_camera * Position_camera^T
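To make this step concrete, here is a minimal numpy sketch of how I build each rotation matrix from the (w, x, y, z) quaternion and then compute t = R * Position. The quaternion and position values are made-up placeholders, not my real calibration:

```python
import numpy as np

def quat_to_rot(w, x, y, z):
    """Convert a unit quaternion w + xi + yj + zk to a 3x3 rotation matrix."""
    n = np.sqrt(w*w + x*x + y*y + z*z)   # normalize defensively
    w, x, y, z = w/n, x/n, y/n, z/n
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# Placeholder poses (not real calibration values)
R_lidar  = quat_to_rot(1.0, 0.0, 0.0, 0.0)            # identity rotation
R_camera = quat_to_rot(0.9239, 0.0, 0.3827, 0.0)      # ~45 deg about Y
pos_lidar  = np.array([1.2, 0.0, 1.8])                # metres, on the vehicle
pos_camera = np.array([1.5, 0.1, 1.4])

# Translations as in the equations above: t = R * Position^T
t_lidar  = R_lidar @ pos_lidar
t_camera = R_camera @ pos_camera
```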
Then the relative rotation and translation are computed as follows:
R_relative = R_camera^T * R_lidar
t_relative = t_lidar - t_camera
Then the final transformation matrix and the mapping from LiDAR points [X, Y, Z, 1] to the image frame [u, v, 1] are given by:
T = [ R_relative | t_relative ]
[u,v,1]^T = K * T * [X,Y,Z,1]^T
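Putting the equations above into code, this is roughly what I run (a simplified sketch with placeholder values, not my actual pipeline). Note that I divide by the third component of K * T * [X, Y, Z, 1]^T, since [u, v, 1] is only defined up to scale; the distortion coefficients D are not applied here, matching the equations:

```python
import numpy as np

def project_lidar_to_image(points_xyz, R_rel, t_rel, K):
    """Project Nx3 LiDAR points into pixel coordinates via K * [R|t]."""
    T = np.hstack([R_rel, t_rel.reshape(3, 1)])                     # 3x4
    pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])  # Nx4 homogeneous
    uvw = (K @ T @ pts_h.T).T                                       # Nx3
    return uvw[:, :2] / uvw[:, 2:3]                                 # normalize to pixels

# Placeholder intrinsics and pose (identity relative pose for illustration)
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
pts = np.array([[0.0, 0.0, 10.0]])   # one point 10 m along the camera Z axis
uv = project_lidar_to_image(pts, np.eye(3), np.zeros(3), K)
# with identity pose, this point lands at the principal point (640, 360)
```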
Is there anything that I am missing?
