Poster in Workshop: Optimization for ML Workshop
Efficient Levenberg-Marquardt for SLAM
Amir Belder · Refael Vivanti
Abstract:
The Levenberg-Marquardt optimization algorithm is widely used across many applications and is well known for its role in Bundle Adjustment (BA), a common method for solving localization and mapping problems. BA is an iterative process that solves a system of non-linear equations by blending two optimization methods: Gauss-Newton (GN), which requires considerable computational resources because it involves computing the (approximate) Hessian, and Gradient Descent (GD). The blend is controlled by a damping factor, $\lambda$, chosen heuristically by the Levenberg-Marquardt algorithm at each iteration. Each method suits a different phase of the solve; in the classic approach, however, the computationally expensive GN step is computed at every iteration, even when it is not needed. We therefore propose predicting the iterations in which the GN computation can be skipped altogether, viewing the problem holistically and formulating it as a Reinforcement Learning (RL) task, extending a previous solution that also predicts the value of $\lambda$. We demonstrate that our method reduces the time required for BA convergence by 12.5% on average.
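To make the mechanism concrete, below is a minimal NumPy sketch of a Levenberg-Marquardt loop on a toy curve-fitting problem. The toy problem, the $\lambda$ update schedule, and the skip_gn predicate (a stand-in for the learned RL policy) are illustrative assumptions, not the authors' implementation; skipping GN here means taking the cheap gradient-style step $-g/\lambda$ instead of solving the damped normal equations $(J^\top J + \lambda I)\,\delta = -g$.

import numpy as np

# Toy nonlinear least-squares problem: fit y = a * exp(b * t).
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.5 * t)

def residual(p):
    a, b = p
    return a * np.exp(b * t) - y

def jacobian(p):
    a, b = p
    e = np.exp(b * t)
    return np.stack([e, a * t * e], axis=1)  # d r / d (a, b)

def skip_gn(i):
    # Hypothetical stand-in for the learned RL policy: here we simply
    # skip the Gauss-Newton solve on every other iteration.
    return i % 2 == 1

p, lam = np.array([1.0, 1.0]), 1e-2
for i in range(50):
    r, J = residual(p), jacobian(p)
    g = J.T @ r                          # gradient of 0.5 * ||r||^2
    if skip_gn(i):
        delta = -g / lam                 # cheap GD-style step, no Hessian
    else:
        # Damped normal equations: (J^T J + lam * I) delta = -g
        delta = np.linalg.solve(J.T @ J + lam * np.eye(2), -g)
    if np.linalg.norm(residual(p + delta)) < np.linalg.norm(r):
        p, lam = p + delta, lam * 0.5    # accept step, trust GN more
    else:
        lam *= 2.0                       # reject step, lean toward GD
print(p)  # converges to roughly [2.0, 1.5]

Note that as $\lambda$ grows, the full damped solve itself approaches the $-g/\lambda$ step, which is why skipping the GN computation in those iterations can save time with little loss in step quality.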