Poster

EDGE: Explaining Deep Reinforcement Learning Policies

Wenbo Guo · Xian Wu · Usmann Khan · Xinyu Xing

Keywords: [ Interpretability ] [ Generative Model ] [ Adversarial Robustness and Security ] [ Kernel Methods ] [ Reinforcement Learning and Planning ]


Abstract:

With the rapid development of deep reinforcement learning (DRL) techniques, there is an increasing need to understand and interpret DRL policies. While recent research has developed explanation methods to interpret how an agent determines its moves, these methods cannot capture the importance of individual actions/states to a game's final result. In this work, we propose a novel self-explainable model that augments a Gaussian process with a customized kernel function and an interpretable predictor. Together with the proposed model, we also develop a parameter learning procedure that leverages inducing points and variational inference to improve learning efficiency. Using our proposed model, we can predict an agent's final rewards from its game episodes and extract time-step importance within episodes as strategy-level explanations for that agent. Through experiments on Atari and MuJoCo games, we verify the explanation fidelity of our method and demonstrate how to employ the resulting interpretations to understand agent behavior, discover policy vulnerabilities, remediate policy errors, and even defend against adversarial attacks.
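To make the learning setup described above concrete, the sketch below fits a sparse variational Gaussian process with learned inducing points (using GPyTorch) to per-time-step episode features and regresses each episode's final reward. This is a minimal sketch under assumed data shapes, not the authors' implementation: the paper's customized kernel and interpretable predictor are stood in for by a plain RBF kernel and direct GP regression, and all names (`EpisodeGP`, `episode_features`, `episode_returns`) are hypothetical placeholders.

```python
import torch
import gpytorch


class EpisodeGP(gpytorch.models.ApproximateGP):
    """Sparse variational GP over per-time-step episode features."""

    def __init__(self, inducing_points):
        # Inducing points + variational inference keep training tractable,
        # mirroring the parameter-learning procedure named in the abstract.
        variational_dist = gpytorch.variational.CholeskyVariationalDistribution(
            inducing_points.size(0)
        )
        strategy = gpytorch.variational.VariationalStrategy(
            self, inducing_points, variational_dist, learn_inducing_locations=True
        )
        super().__init__(strategy)
        self.mean_module = gpytorch.means.ConstantMean()
        # Stand-in for the paper's customized kernel: a generic RBF kernel.
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )


# Hypothetical data: N episodes of T steps with D features per step, flattened
# so each time step is one GP input; each step is labeled with its episode's
# final return, so the model learns to predict rewards from trajectories.
N, T, D = 64, 100, 8
episode_features = torch.randn(N * T, D)        # per-step state-action features
episode_returns = torch.randn(N)                # final reward per episode
targets = episode_returns.repeat_interleave(T)  # one label per time step

model = EpisodeGP(inducing_points=episode_features[:32].clone())
likelihood = gpytorch.likelihoods.GaussianLikelihood()
mll = gpytorch.mlls.VariationalELBO(likelihood, model, num_data=targets.numel())

optimizer = torch.optim.Adam(
    list(model.parameters()) + list(likelihood.parameters()), lr=0.01
)
model.train()
likelihood.train()
for _ in range(200):
    optimizer.zero_grad()
    loss = -mll(model(episode_features), targets)  # maximize the ELBO
    loss.backward()
    optimizer.step()
```

In the paper, time-step importance is read from the interpretable predictor attached to the GP; in a rough approximation like this one, a comparable signal could be obtained by measuring how sensitive the predicted return is to each step's features.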
