Poster
Real-Time Reinforcement Learning
Simon Ramstedt · Chris Pal
East Exhibition Hall B, C #209
Keywords: [ Reinforcement Learning ] [ Reinforcement Learning and Planning ] [ Markov Decision Processes ] [ Decision and Control ]
Markov Decision Processes (MDPs), the mathematical framework underlying most algorithms in Reinforcement Learning (RL), are often used in a way that incorrectly assumes that the state of an agent's environment does not change during action selection. As RL systems based on MDPs begin to find application in real-world, safety-critical situations, this mismatch between the assumptions underlying classical MDPs and the reality of real-time computation may lead to undesirable outcomes. In this paper, we introduce a new framework in which states and actions evolve simultaneously, and show how it is related to the classical MDP formulation. We analyze existing algorithms under the new real-time formulation and show why they are suboptimal when used in real time. We then use those insights to create a new algorithm, Real-Time Actor-Critic (RTAC), that outperforms the existing state-of-the-art continuous control algorithm, Soft Actor-Critic, in both real-time and non-real-time settings. Code and videos can be found at https://github.com/rmst/rtrl.
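The abstract's core idea, states and actions evolving simultaneously, can be pictured as an environment wrapper in which the action chosen at step t only takes effect at step t+1, while the observation is augmented with the action currently being executed. The sketch below is an illustrative assumption, not the authors' released implementation: the class name `RealTimeWrapper` and the older four-tuple Gym `step` API are choices made here for brevity.

```python
# Minimal sketch (assumed, not from the rtrl repository) of a real-time
# interaction scheme: the newly selected action is delayed by one step,
# and the observation carries the action currently "in flight".
import numpy as np
import gym


class RealTimeWrapper(gym.Wrapper):
    """Delays action application by one step and appends the in-flight
    action to the observation, approximating simultaneous state/action
    evolution."""

    def __init__(self, env):
        super().__init__(env)
        self.prev_action = np.zeros(env.action_space.shape)
        # New observation = (original observation, action being executed).
        self.observation_space = gym.spaces.Tuple(
            (env.observation_space, env.action_space))

    def reset(self, **kwargs):
        self.prev_action = np.zeros(self.env.action_space.shape)
        obs = self.env.reset(**kwargs)
        return (obs, self.prev_action)

    def step(self, action):
        # Apply the action selected on the *previous* step; the action passed
        # in now only takes effect at the next call, as in real-time control.
        obs, reward, done, info = self.env.step(self.prev_action)
        self.prev_action = action
        return (obs, action), reward, done, info
```

Under this wrapping, a standard agent trained on the augmented observation faces the real-time setting the paper analyzes, whereas an unwrapped environment corresponds to the classical MDP assumption that the world pauses during action selection.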