2 votes

I am looking to understand how to implement a simple loop closing algorithm.

Here is my situation: I have x point clouds, and I have used a 3D registration algorithm that has given me the pose of every one of them.

In the end I more or less end up back at the same point in my map, but with drift. I can use my registration algorithm to see where my final point cloud actually sits relative to my initial one.

Knowing that, I would like to globally optimize the rest of my point clouds all the way to my initial one, based on the "drift" that I calculated.

I have managed to quickly code something for the translation, which seems correct, but the rotation is problematic: the accuracy/superposition of features (walls, etc.) is reduced.

What I have been looking at: the g2o, GTSAM and iSAM libraries, which all do this kind of optimization, but I feel there is a huge overhead to implementing them; they all require multiple constraints, setting a huge number of parameters, etc.

I am not even looking to detect loops automatically (I'll do that later). I would just like to say: these two point clouds form a loop; propagate (correctly) the drift in translation and rotation (which I calculate) between them to all the point clouds in between.

Thanks in advance,


2 Answers

1 vote

Loop closing as a whole is usually decomposed into these steps:

1. Loop detection
2. Loop closing
3. Loop correction

You don't need automatic loop detection right now, so I'll skip it.

Loop closing is the problem of obtaining the right pose of one end so that it fits the other end. The pose is an SE3 transform (a 3D roto-translation), or, if you have scale drift (as in monocular visual SLAM), a Sim3 transform (a 3D similarity).

With this pose, Loop correction is the problem of correcting all points and keyframes to bring coherence to the whole map.
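For concreteness, here is a minimal sketch of how such a pose can be represented as a 4x4 homogeneous matrix, in plain NumPy and not tied to any SLAM library (the function names are my own):

import numpy as np

def make_se3(R, t):
    """4x4 SE3 transform from a 3x3 rotation R and a 3-vector translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def make_sim3(R, t, s):
    """4x4 Sim3 transform: like SE3, but the rotation block is scaled by s."""
    T = np.eye(4)
    T[:3, :3] = s * R
    T[:3, 3] = t
    return T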

Scale Drift-Aware Large Scale Monocular SLAM is a paper from Strasdat et al. describing a method to correct loops (the third step).

ORB-SLAM2 implements it in Optimizer::OptimizeEssentialGraph, and is open source.

1 vote

I was waiting for this question. In my master's degree I developed exactly that: a global optimization function to refine rotations in loop-closure cases. The loop-closure rotation matrix should be the SO(3) identity, right? But because of errors it isn't; it is only something close to it. Let's call it the "loop-closure rotation" and represent it by a quaternion called q_loop.

Just interpolate, using the SLERP technique, between each pose rotation and that same rotation multiplied by q_loop^(-1). The trick is to interpolate with an interval proportional to each pose's position in the circuit. Because each pose accumulates a different magnitude of error, that interval cannot be constant, right? What is this error proportional to? To the number of transformations accumulated to compute that pose, so the same holds for these intervals.
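A minimal sketch of those building blocks (quaternion product, inverse and SLERP) in Python/NumPy, assuming unit quaternions stored as (w, x, y, z) arrays; the helper names are my own, not from any particular library:

import numpy as np

def quat_conjugate(q):
    """Inverse of a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_multiply(a, b):
    """Hamilton product a*b of two quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    ])

def slerp(q0, q1, t):
    """Spherical linear interpolation from q0 (t=0) to q1 (t=1)."""
    q0 = q0 / np.linalg.norm(q0)
    q1 = q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    if dot < 0.0:              # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:           # nearly identical: fall back to normalized lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1.0 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)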

If you have n clouds, then you need n-1 poses to put them all in the global origin. The first pose is P1 = (R1, t1). To optimize R1, first transform it into a quaternion q1, then multiply it by q_loop^-1; call the result p1 = q1*q_loop^-1. The first optimal rotation is q1_optimal = SLERP(q1, p1, 1/n). The second optimal rotation is q2_optimal = SLERP(q2, p2, 2/n).

According to this pattern, if you have n = 4 clouds, the optimal pose rotations will be

q1_optimal = SLERP(q1, p1, 1/4)

q2_optimal = SLERP(q2, p2, 2/4)

q3_optimal = SLERP(q3, p3, 3/4)

[1/4, 2/4, 3/4] are the interpolation intervals, which vary in [0, 1].

These intervals distribute the error from the final poses back to the first ones, but they also decrease the total error. They can be derived by applying least squares to the exponents of the SLERP technique under the constraint that the loop-closure quaternion should be the identity. When we compute pn = qn*q_loop^(-1), pn is the rotation composed by going around the circuit the other way. For the first pose rotation, going that way accumulates more error, so the first rotation is interpolated close to 0, which explains the 1/4. The last pose rotations are interpolated close to 1, because going the other way around accumulates less error, which explains the 3/4 for pose P3.

With n = 4 clouds, one of the rotations is interpolated exactly in the middle, 2/4 = 0.5, because going either way around, that pose accumulates two rotations. This method assumes a uniform error distribution in the rotation of each pair, which is not strictly true, but the model was tested and the results are great on point clouds from TLS and from robots (Velodyne LIDAR). The work is not published yet.
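Putting the interval rule together, here is a hedged sketch of the whole correction loop, reusing the slerp, quat_multiply and quat_conjugate helpers sketched earlier (pose_quats is assumed to be the list [q1, ..., q_{n-1}] of pose rotations as quaternions, and q_loop the loop-closure quaternion):

def correct_rotations(pose_quats, q_loop):
    """Distribute the loop-closure rotation over all poses via SLERP.

    pose_quats: list of n-1 unit quaternions [q1, ..., q_{n-1}]
    q_loop:     loop-closure quaternion (should be close to the identity)
    Returns the optimized rotations, pose k interpolated at interval k/n.
    """
    n = len(pose_quats) + 1                      # n clouds -> n-1 poses
    q_loop_inv = quat_conjugate(q_loop)
    corrected = []
    for k, qk in enumerate(pose_quats, start=1):
        pk = quat_multiply(qk, q_loop_inv)       # pk = qk * q_loop^-1
        corrected.append(slerp(qk, pk, k / n))   # interval proportional to k
    return corrected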

[Figure: (a) a closed circuit of TLS stations E0-E4 around an object; (b) the corresponding pose rotations]

I'm going to explain how to find the loop-closure rotation using the image above. Suppose that you have a circuit of TLS stations around an object, like in (a). It can be any other type of circuit, with any geometry, and the number of clouds doesn't matter. You will need to register the clouds sequentially, starting from E0, the global origin. Do the registration like this:

E1(source)->E0(target) = T10 (4x4 matrix),
E2(source)->E1(target) = T21 (4x4 matrix),
E3(source)->E2(target) = T32 (4x4 matrix),
E4(source)->E3(target) = T43 (4x4 matrix), 
E0(source)->E4(target) = T04 (4x4 matrix).

Then take the rotation R from each transformation T and multiply them in this order: R04*R43*R32*R21*R10. What will you obtain? Something close to the 3x3 identity matrix, which represents a null rotation. If you use quaternions, you'll find something close to q_loop = (1, 0i, 0j, 0k), with i, j, k = (-1)^(1/2).
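As a rough illustration, assuming the 3x3 rotation blocks have already been extracted from the 4x4 registration matrices and collected in registration order:

import numpy as np

def loop_rotation(pairwise_R):
    """Compose the pairwise rotations around the circuit.

    pairwise_R: [R10, R21, R32, R43, R04] as 3x3 rotation matrices,
                in the order the registrations were done.
    Returns R_loop = R04 @ R43 @ R32 @ R21 @ R10 (the identity if there were no drift).
    """
    R_loop = np.eye(3)
    for R in pairwise_R:          # R10 is applied first, so it ends up rightmost
        R_loop = R @ R_loop
    return R_loop

# With real data the residual is small but nonzero:
# np.linalg.norm(loop_rotation(pairwise_R) - np.eye(3))

q_loop is then just this matrix converted to a quaternion with whatever matrix-to-quaternion routine you already use.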

Figure (b) shows the pose rotations; to find them, do:

R_pose1 = R10
R_pose2 = R21*R10
R_pose3 = R32*R21*R10
R_pose4 = R43*R32*R21*R10
R_loop  = R04*R43*R32*R21*R10
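The same chain in code form, as a sketch (again assuming pairwise_R = [R10, R21, R32, R43, R04]):

import numpy as np

def pose_rotations(pairwise_R):
    """Cumulative rotation of each pose in the global frame (E0).

    pairwise_R: [R10, R21, R32, R43, R04] as 3x3 matrices.
    Returns [R_pose1, R_pose2, R_pose3, R_pose4, R_loop],
    where R_pose_k = R_k(k-1) @ ... @ R21 @ R10.
    """
    poses = []
    R_acc = np.eye(3)
    for R in pairwise_R:
        R_acc = R @ R_acc          # left-multiply the newest pairwise rotation
        poses.append(R_acc)
    return poses                   # the last element is R_loop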

The image below shows a circuit with 901 clouds. The error accumulated (drift) in each registration makes the two ends not coincide; the start and the end of the circuit differ by more than 80 m. After the global optimization this error is attenuated significantly and the ends coincide. [Image: result of SLERP in loop closure]

To tell the truth, these results were obtained using two global optimizations: one for rotations, which I explained above, and another for translations, as suggested in (Lu & Milios, 1997). And remember: only one closed circuit is being treated here. There is no use of graphs with multiple paths and cycles. When dealing with this kind of optimization you'll naturally think about how to use multiple overlaps, but avoid it; that search space is just too big.