On the Convergence of Smooth Regularized Approximate Value Iteration Schemes
Elena Smirnova, Elvis Dohmatob
Spotlight presentation: Orals & Spotlights Track 31: Reinforcement Learning
on 2020-12-10T08:00:00-08:00 - 2020-12-10T08:10:00-08:00
Poster Session 6
on 2020-12-10T09:00:00-08:00 - 2020-12-10T11:00:00-08:00
GatherTown: Reinforcement Learning and Planning ( Town B2 - Spot A2 )
Abstract: Entropy regularization, smoothing of Q-values, and neural network function approximators are key components of state-of-the-art reinforcement learning (RL) algorithms such as Soft Actor-Critic~\cite{haarnoja2018soft}. Despite their widespread use, the impact of these core techniques on the convergence of RL algorithms is not yet fully understood. In this work, we analyse these techniques from an error propagation perspective using the approximate dynamic programming framework. In particular, our analysis shows that (1) value smoothing increases the stability of the algorithm in exchange for slower convergence, (2) entropy regularization reduces overestimation errors at the cost of modifying the original problem, and (3) a combination of these two techniques describes the Soft Actor-Critic algorithm.
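As a rough sketch of the two ingredients discussed in the abstract (the exact operators, notation, and constants here are assumptions for illustration, not taken from the paper), entropy regularization with temperature $\tau$ replaces the hard max in the Bellman backup by a log-sum-exp, and value smoothing with step size $\alpha \in (0,1]$ averages the new backup with the previous Q-values:
\[
(\mathcal{T}_\tau Q)(s,a) \;=\; r(s,a) + \gamma\, \mathbb{E}_{s'}\Big[\tau \log \sum_{a'} \exp\big(Q(s',a')/\tau\big)\Big],
\qquad
Q_{k+1} \;=\; (1-\alpha)\, Q_k + \alpha\, \mathcal{T}_\tau Q_k .
\]
In this schematic form, letting $\tau \to 0$ recovers the standard max backup, and setting $\alpha = 1$ recovers unsmoothed value iteration.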