Poster
Predicting Future Actions of Reinforcement Learning Agents
Stephen Chung · Scott Niekum · David Krueger
As reinforcement learning (RL) agents are increasingly deployed in real-world scenarios, predicting their future actions and events during deployment is important for facilitating better interaction and preventing catastrophic outcomes. This paper experimentally evaluates and compares the effectiveness of future action and event prediction for three types of RL agents: explicitly planning, implicitly planning, and non-planning. We employ two approaches: an inner-state approach, which predicts from the agent's internal computations (e.g., plans or neuron activations), and a simulation-based approach, which unrolls the agent in a learned world model. Our results show that the plans of explicitly planning agents are significantly more informative for prediction than the neuron activations of the other agent types. Furthermore, when predicting actions, using internal plans proves more robust to degradation in model quality than the simulation-based approach. These findings highlight the benefits of leveraging internal plans to predict future agent actions and events, thereby improving interaction and safety in real-world deployments.
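To make the two approaches concrete, here is a minimal Python sketch, not the paper's code: every name (Agent, WorldModel, predict_by_simulation, predict_from_inner_state, the probe weights) is a hypothetical placeholder, and the toy linear dynamics stand in for a trained world model. It contrasts reading a future action off the agent's inner state with a probe versus unrolling the agent's policy inside a learned world model.

```python
# Illustrative sketch of the two prediction approaches (assumed setup,
# not the authors' implementation).
import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS = 4
STATE_DIM = 8
HIDDEN_DIM = 16


class Agent:
    """Toy policy whose inner state (e.g., neuron activations) is observable."""

    def __init__(self):
        self.w_hidden = rng.normal(size=(STATE_DIM, HIDDEN_DIM))
        self.w_out = rng.normal(size=(HIDDEN_DIM, N_ACTIONS))

    def inner_state(self, obs):
        # Stand-in for a plan or activation vector read off the agent.
        return np.tanh(obs @ self.w_hidden)

    def act(self, obs):
        return int(np.argmax(self.inner_state(obs) @ self.w_out))


class WorldModel:
    """Learned dynamics model; a fixed random linear map here for brevity."""

    def __init__(self):
        self.a = rng.normal(scale=0.3, size=(STATE_DIM, STATE_DIM))
        self.b = rng.normal(scale=0.3, size=(N_ACTIONS, STATE_DIM))

    def step(self, obs, action):
        one_hot = np.eye(N_ACTIONS)[action]
        return np.tanh(obs @ self.a + one_hot @ self.b)


def predict_by_simulation(agent, model, obs, horizon):
    """Simulation-based approach: unroll the agent inside the world model."""
    actions = []
    for _ in range(horizon):
        action = agent.act(obs)
        actions.append(action)
        obs = model.step(obs, action)
    return actions


def predict_from_inner_state(probe_w, inner):
    """Inner-state approach: a probe maps inner computations to a future action."""
    return int(np.argmax(inner @ probe_w))


if __name__ == "__main__":
    agent, model = Agent(), WorldModel()
    obs = rng.normal(size=STATE_DIM)

    # Simulation-based prediction of the next 5 actions.
    print("simulated:", predict_by_simulation(agent, model, obs, horizon=5))

    # Inner-state prediction with a random linear probe; in the paper's
    # setting the probe would be fit on logged rollouts of the agent.
    probe_w = rng.normal(size=(HIDDEN_DIM, N_ACTIONS))
    print("probed:   ", predict_from_inner_state(probe_w, agent.inner_state(obs)))
```

The sketch makes the trade-off visible: the simulation-based route compounds world-model error over the horizon, while the inner-state route depends only on how much the agent's internal computations (plans in particular) reveal about its future behavior.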