
I am doing structure from motion from multiple images using OpenCV. I have 8 images, and I have generated 3D point clouds for each pair of images (img1&2, img2&3, img3&4 etc.). I know that each individual 3D point cloud is correct because they look good when displayed in VTK / OpenGL.

My cameras are (roughly) calibrated, using EXIF metadata for the focal length and the center of the image as the principal point.
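For reference, the kind of rough calibration described above can be sketched in Python/NumPy as below. The function name and the assumption that the EXIF focal length has already been converted to pixels are illustrative, not from the original post:

```python
import numpy as np

def intrinsics_from_exif(focal_px, width, height):
    """Rough camera matrix K: focal length taken from EXIF metadata
    (already converted to pixels) and the principal point assumed to
    lie at the image center."""
    return np.array([
        [focal_px, 0.0,      width  / 2.0],
        [0.0,      focal_px, height / 2.0],
        [0.0,      0.0,      1.0],
    ])

# Example: a 4000x3000 image with a 2400 px focal length
K = intrinsics_from_exif(2400.0, 4000, 3000)
```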

How do I transform each of these 3D point clouds into the 3D coordinate system of the leftmost camera?


1 Answer


I am assuming you have your point clouds stored in a PCL-compatible format; if so, you can simply use pcl::transformPointCloud. If not, you will need to implement your own transformation based on the source code in transforms.hpp.
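If you are not using PCL, the operation pcl::transformPointCloud performs is straightforward to reproduce. A minimal NumPy sketch (the function name is mine, not PCL's):

```python
import numpy as np

def transform_point_cloud(points, T):
    """Apply a 4x4 homogeneous transform T to an (N, 3) array of
    points -- the same operation pcl::transformPointCloud performs."""
    homog = np.hstack([points, np.ones((points.shape[0], 1))])
    return (T @ homog.T).T[:, :3]
```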

HTH

EDIT:

Please refer to slides 16-19 in this presentation. The transformation model is,

P_c = R_c (P_w - C)

where P_w is a point in world coordinates, C is the camera center, R_c is the camera's rotation, and P_c is the same point in the camera's coordinate system. This is the mathematical form of the transformation given in my previous link.
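To chain your pairwise reconstructions into the leftmost camera's frame, invert each pairwise pose and compose the inverses. A sketch under an assumed convention (not from the original post): (R_i, t_i) is the relative pose recovered from image pair (i, i+1), mapping camera i's frame to camera i+1's frame via P_{i+1} = R_i P_i + t_i.

```python
import numpy as np

def rt_to_matrix(R, t):
    """Pack a rotation R and translation t into a 4x4 homogeneous matrix."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(t).ravel()
    return T

def cloud_to_first_camera(points_k, pairwise_RT):
    """Map points expressed in camera k's frame into camera 1's frame.

    pairwise_RT = [(R_1, t_1), ..., (R_{k-1}, t_{k-1})], where (R_i, t_i)
    takes camera i's frame to camera i+1's frame. Names and convention
    are illustrative assumptions."""
    T = np.eye(4)
    for R, t in pairwise_RT:
        # Invert each step to go from frame i+1 back to frame i,
        # accumulating the map from camera k's frame to camera 1's.
        T = T @ np.linalg.inv(rt_to_matrix(R, t))
    homog = np.hstack([points_k, np.ones((points_k.shape[0], 1))])
    return (T @ homog.T).T[:, :3]
```

Note that each pairwise reconstruction has its own arbitrary scale, so in practice you would also need the relative translations expressed in a consistent scale before composing them.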