I was waiting for this question. In my master's degree I developed exactly that: a global optimization to refine rotations in loop-closure cases. The loop-closure rotation matrix should be the SO(3) identity, right? But because of errors it isn't; it's something slightly off. Let's call it the "loop-closure rotation" and represent it by a quaternion called q_loop.
Just interpolate, using the SLERP technique, between the pose rotations and those same rotations multiplied by q_loop^(-1). The trick is to interpolate with a parameter proportional to each pose's position in the circuit. Because each pose accumulates a different amount of error, that parameter cannot be constant, right? What is this error proportional to? To the number of transformations accumulated to compute that pose, so the same is true of these interpolation parameters.
If you have n clouds, then you need n-1 poses to bring them all to the global origin. The first pose is P1 = (R1, t1). To optimize R1, first convert it to a quaternion q1, then multiply it by q_loop^(-1); call the result p1 = q1 * q_loop^(-1). The first optimal rotation is q1_optimal = SLERP(q1, p1, 1/n). The second optimal rotation is q2_optimal = SLERP(q2, p2, 2/n).
Following this pattern, if you have n = 4 clouds, the optimal pose rotations will be

q1_optimal = SLERP(q1, p1, 1/4)
q2_optimal = SLERP(q2, p2, 2/4)
q3_optimal = SLERP(q3, p3, 3/4)

[1/4, 2/4, 3/4] are the interpolation parameters, which vary in [0, 1].
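The steps above can be sketched in a few lines. This is a minimal illustration assuming SciPy's `Rotation`/`Slerp` API; the pose rotations and q_loop here are synthetic stand-ins, not real registration output:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

n = 4  # number of clouds -> n - 1 poses to optimize

# Synthetic pose rotations q1..q3 and a near-identity closure error q_loop
rng = np.random.default_rng(0)
poses = [Rotation.from_rotvec(rng.normal(scale=0.5, size=3)) for _ in range(n - 1)]
q_loop = Rotation.from_rotvec([0.01, -0.02, 0.015])  # should be identity, but drift

optimized = []
for k, q_k in enumerate(poses, start=1):
    p_k = q_k * q_loop.inv()  # the same pose composed "the inverse way"
    slerp = Slerp([0.0, 1.0], Rotation.concatenate([q_k, p_k]))
    optimized.append(slerp(k / n))  # interpolation parameter k/n
```

Each optimized rotation ends up rotated away from the original q_k by exactly (k/n) times the angle of q_loop, which is the proportional error distribution described above.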
These parameters distribute the error from the final poses to the first ones, but they also decrease the total error. They can be derived by applying least squares to the exponents of the SLERP technique under the constraint that the loop-closure quaternion must be the identity. When we compute qn * q_loop^(-1) and get pn, pn is the rotation composed by going the inverse way around the circuit. But for the first pose rotation, going that way accumulates more error, so the first rotation is interpolated near 0, which explains the 1/4. The last pose rotations are interpolated near 1, because going the inverse way accumulates fewer errors, which explains the 3/4 for pose P3.
With n = 4 clouds, one of the rotations is interpolated exactly in the middle, 2/4 = 0.5, because going one way or the other, that pose accumulates two rotations. This method assumes a uniform error distribution in the rotation of each pair; that is not strictly true, but the model was tested and the results are great on point clouds from TLS and from robots (Velodyne LiDAR). Work not published yet.
I'm going to explain how to find the loop-closure rotation using the image above. Suppose you have a circuit of TLS stations around an object, like in (a). It can be any other type of circuit, with any geometry, and the number of clouds doesn't matter. You need to register the clouds sequentially, starting from E0, the global origin. Do the registration like this:
E1(source)->E0(target) = T10 (4x4 matrix),
E2(source)->E1(target) = T21 (4x4 matrix),
E3(source)->E2(target) = T32 (4x4 matrix),
E4(source)->E3(target) = T43 (4x4 matrix),
E0(source)->E4(target) = T04 (4x4 matrix).
Then take the rotation R from each transformation T and multiply in this order: R04*R43*R32*R21*R10. What will you obtain? Something close to the 3x3 identity matrix, which represents a null rotation. If you use quaternions, you'll find something close to q_loop = (1, 0i, 0j, 0k), with i, j, k the quaternion imaginary units (i^2 = j^2 = k^2 = -1).
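This composition can be sketched as follows, again assuming SciPy. The pairwise rotations here are synthetic, with a small drift injected into the closing registration so the chain does not cancel exactly, as happens with real data:

```python
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(1)
R10, R21, R32, R43 = [Rotation.from_rotvec(rng.normal(scale=0.3, size=3))
                      for _ in range(4)]

# Closing registration: ideally the exact inverse of the chain, here with drift
drift = Rotation.from_rotvec([0.005, 0.01, -0.008])
R04 = drift * (R43 * R32 * R21 * R10).inv()

R_loop = R04 * R43 * R32 * R21 * R10  # multiply in this order
angle = R_loop.magnitude()            # closure rotation angle, close to 0
```

With perfect registrations `angle` would be exactly zero; here it equals the injected drift angle, which is what q_loop captures.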
Figure (b) shows the pose rotations; to find them, do:
R_pose1 = R10
R_pose2 = R21*R10
R_pose3 = R32*R21*R10
R_pose4 = R43*R32*R21*R10
R_loop = R04*R43*R32*R21*R10
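The accumulation pattern above is just a running left-multiplication along the chain. A minimal sketch, assuming SciPy, where `pairwise` stands in for [R10, R21, R32, R43, R04]:

```python
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(2)
pairwise = [Rotation.from_rotvec(rng.normal(scale=0.2, size=3)) for _ in range(5)]

pose_rotations = []
acc = Rotation.identity()
for R in pairwise:
    acc = R * acc  # R_pose_k = R_k(k-1) * ... * R21 * R10
    pose_rotations.append(acc)
# pose_rotations[:4] are R_pose1..R_pose4; pose_rotations[4] is R_loop
```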
This image shows a circuit with 901 clouds; the error accumulated (drift) in each registration makes the two ends fail to coincide, and the start and the end of the circuit differ by more than 80 m. After the global optimization this error is attenuated significantly, and the ends coincide.
To tell the truth, these results were obtained using two global optimizations: one for rotations, which I explained above, and another for translations, as suggested in (LU & MILIOS, 1997). And remember: only one closed circuit is being treated. There's no use of graphs with multiple paths and cycles here. When dealing with this kind of optimization you'll naturally think about using multiple overlaps, but avoid it: that search space is just too big.
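The translation step in (LU & MILIOS, 1997) is a linear least-squares problem over the whole pose network; for a single closed circuit, a much simpler proportional distribution already illustrates the idea. The sketch below is my simplification, not the paper's formulation, and the pose translations are hypothetical values; it mirrors the k/n weights used for the rotations:

```python
import numpy as np

# Hypothetical global-frame pose translations around the circuit; the
# last one should return to the origin but misses by t_loop.
pose_t = np.array([[1.00, 0.00, 0.00],
                   [1.00, 1.00, 0.00],
                   [0.00, 1.00, 0.00],
                   [0.05, 0.02, -0.01]])  # closure error, should be (0, 0, 0)
t_loop = pose_t[-1]
n = len(pose_t)

# Subtract a share of the closure error proportional to each pose's
# index, analogous to the k/n SLERP parameters for the rotations.
weights = np.arange(1, n + 1)[:, None] / n
corrected = pose_t - weights * t_loop
```

After the correction the final pose lands back on the origin, and each earlier pose absorbs a share of the error proportional to how far along the circuit it sits.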