Use-case
- An object is rotating around its center at varying speed
- A fixed camera is looking at the object
- Given the 2D image point correspondences, reconstruct the 3D point cloud
- As the object rotates, a different part of it becomes visible to the camera, and thus different points and correspondences are detected.
Scene
a. N Images
b. N-1 Image pairs
c. N-1 sets of 2D point correspondences (two arrays of 2D points each; see the layout sketch below)
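For reference, a minimal sketch of this input layout, assuming NumPy arrays; the names `images`, `pairs` and `correspondences` are placeholders, not part of the original description:

```python
import numpy as np

N = 5                                          # number of images (example value)
images = [f"frame_{i}.png" for i in range(N)]  # [a] N images
pairs = list(zip(range(N - 1), range(1, N)))   # [b] N-1 consecutive image pairs

# [c] one correspondence set per pair: two row-aligned (K_i x 2) pixel arrays,
# so that pts_a[k] in image i matches pts_b[k] in image i+1 (K_i varies per
# pair, since different parts of the object are visible at different times).
correspondences = [
    (np.zeros((0, 2), np.float64), np.zeros((0, 2), np.float64)) for _ in pairs
]
```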
Implementation
For each of the (N-1) 2D point correspondences (a code sketch follows this list):
1. Compute the camera's relative pose.
2. Triangulate to obtain the 3D points.
3. For each pair of consecutive 3D point arrays, derive the 3D correspondence using the 2D correspondences given at [c].
4. Using the 3D correspondences derived at [3], derive the track of each of the object's 3D points, resulting in a single track for each object point/vertex.
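Below is a hedged sketch of steps [1]-[4], assuming OpenCV/NumPy, a calibrated camera with intrinsic matrix `K`, and hypothetical per-pair feature ids (`match_ids`) that allow consecutive reconstructions to be chained through the 2D correspondences of [c]; all function and variable names are placeholders, not the actual implementation:

```python
import cv2
import numpy as np

def process_pair(pts1, pts2, K):
    """Steps 1-2: relative pose and triangulation for one image pair.
    pts1, pts2 are row-aligned (K_i x 2) float arrays of pixel coordinates."""
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)        # step 1: relative pose
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])     # earlier camera at origin
    P2 = K @ np.hstack([R, t])                            # later camera of the pair
    X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)   # step 2: 4xK homogeneous
    X = (X_h[:3] / X_h[3]).T                              # Kx3 points, up to scale
    return R, t, X

def build_tracks(points_3d, match_ids):
    """Steps 3-4: chain per-pair 3D points into one track per object point,
    assuming match_ids[i][j] is a persistent feature id shared across pairs."""
    tracks = {}                                 # feature id -> list of 3D positions
    for i, X in enumerate(points_3d):           # N-1 triangulated point arrays
        for j, fid in enumerate(match_ids[i]):
            tracks.setdefault(fid, []).append(X[j])
    return tracks
```

Looping `process_pair` over the N-1 correspondence sets yields the relative poses and triangulated arrays; `build_tracks` then groups them into one track per object point, which is the output summarized under Result.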
Result:
(N-1) arrays of 3D points, (N-2) 3D correspondences between consecutive arrays, (N-1) relative camera poses, and tracks (one track for each object point).
Approach considered to resolve the problem:
Given that each triangulation result is accurate only up to scale, calculate the point cloud as follows:
A. Each of the triangulation results and relative camera translations is
expressed in its own arbitrary scale (each result has a different scale).
B. Under the assumption that the object is rigid, so its structure does not change,
the distance of each 3D point to the object's center should be identical across all camera poses.
C. With [B] in mind, all triangulated 3D points from [A] and the camera translations
can be converted to a single common scale.
D. Select one of the camera poses and transform the first point in each track (defined at [4])
into that camera pose (transform by the inverse of the accumulated camera
pose), resulting in the expected point cloud (see the sketch below).
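A hedged sketch of [A]-[D], assuming the per-pair rotations, translations and 3D point arrays produced by the loop above. It uses the mean point-to-centroid distance as the rigidity yardstick implied by [B] (a more faithful version would compare the per-point distances through the tracks), and it maps every triangulated point back to the reference camera; restricting this to the first point of each track, as [D] states, is only a matter of indexing. All names are placeholders:

```python
import numpy as np

def normalize_scales(points_3d, translations):
    """[A]-[C]: rescale every pair's reconstruction (and its translation)
    to the scale of the first pair, using the mean distance of the 3D points
    to their centroid as a rigidity-based yardstick."""
    ref_spread = None
    scaled_pts, scaled_t = [], []
    for X, t in zip(points_3d, translations):
        spread = np.mean(np.linalg.norm(X - X.mean(axis=0), axis=1))
        if ref_spread is None:
            ref_spread = spread
        s = ref_spread / spread
        scaled_pts.append(X * s)
        scaled_t.append(t * s)
    return scaled_pts, scaled_t

def to_reference_frame(points_3d, rotations, translations):
    """[D]: accumulate the relative poses and transform each pair's points by
    the inverse of the accumulated pose, expressing them in the frame of the
    first (reference) camera pose."""
    R_acc, t_acc = np.eye(3), np.zeros((3, 1))   # pose: reference -> current camera
    cloud = []
    for X, R, t in zip(points_3d, rotations, translations):
        # X is expressed in the earlier camera frame of its pair; undo the
        # accumulated pose to bring it into the reference frame.
        cloud.append((R_acc.T @ (X.T - t_acc)).T)
        t_acc = R @ t_acc + t                    # compose this pair's relative pose
        R_acc = R @ R_acc
    return np.vstack(cloud)
```

A possible call order: `scaled_pts, scaled_t = normalize_scales(points_3d, translations)` followed by `cloud = to_reference_frame(scaled_pts, rotations, scaled_t)`.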