Poster
Towards Interpretable Reinforcement Learning Using Attention Augmented Agents
Alexander Mott · Daniel Zoran · Mike Chrzanowski · Daan Wierstra · Danilo Jimenez Rezende
East Exhibition Hall B, C #235
Keywords: [ Attention Models ] [ Deep Learning -> Visualization or Exposition Techniques for Deep Networks ] [ Reinforcement Learning and Planning ]
Inspired by recent work in attention models for image captioning and question answering, we present a soft attention model for the reinforcement learning domain. This model bottlenecks the view of an agent by a soft, top-down attention mechanism, forcing the agent to focus on task-relevant information by sequentially querying its view of the environment. The output of the attention mechanism allows direct observation of the information used by the agent to select its actions, enabling easier interpretation of this model than of traditional models. We analyze the different strategies the agents learn and show that a handful of strategies arise repeatedly across different games. We also show that the model learns to query separately about space and content ("where" vs. "what").
We demonstrate that an agent using this mechanism can achieve performance competitive with state-of-the-art models on ATARI tasks while still being interpretable.
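To make the mechanism concrete, here is a minimal sketch of a single soft, top-down attention read over a spatial feature map, not the authors' implementation: the function name soft_attention_read, the shapes, and the random inputs standing in for a vision core's output are all assumptions, and the full model additionally generates queries top-down with a recurrent policy core and uses multiple attention heads and a fixed spatial basis.

```python
import numpy as np

def soft_attention_read(features, spatial_basis, query):
    """One soft top-down attention read over a spatial feature map.

    features:      (H, W, c) feature map, e.g. a vision core's output (assumed shape)
    spatial_basis: (H, W, s) fixed positional features, so queries can target
                   location ("where") as well as content ("what")
    query:         (c + s,) top-down query vector (in the paper, produced by the
                   recurrent policy core; here supplied directly for illustration)
    """
    H, W, _ = features.shape
    # Keys carry both content and position, letting one query mix "what" and "where".
    keys = np.concatenate([features, spatial_basis], axis=-1).reshape(H * W, -1)
    # Scaled dot-product scores over all H*W locations.
    logits = keys @ query / np.sqrt(keys.shape[-1])
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                      # softmax -> soft attention map
    answer = weights @ keys                       # (c + s,) weighted read-out
    # The (H, W) weight map is exactly what can be visualized for interpretability.
    return answer, weights.reshape(H, W)

# Illustrative usage with random stand-in tensors.
rng = np.random.default_rng(0)
feats = rng.normal(size=(9, 9, 32))
basis = rng.normal(size=(9, 9, 8))
q = rng.normal(size=(40,))
answer, attn_map = soft_attention_read(feats, basis, q)
```

The key design point the sketch illustrates is that the agent's action selection is bottlenecked through `answer`: only information surfaced by the attention weights reaches the policy, so inspecting `attn_map` directly shows what the agent attended to when acting.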