

Poster

SMILe: Scalable Meta Inverse Reinforcement Learning through Context-Conditional Policies

Kamyar Ghasemipour · Shixiang (Shane) Gu · Richard Zemel

East Exhibition Hall B, C #212

Keywords: [ Algorithms ] [ Meta-Learning ] [ Algorithms -> Few-Shot Learning; Reinforcement Learning and Planning ] [ Decision and Control; Reinforcement Learning and Planning ]


Abstract:

Imitation Learning (IL) has been successfully applied to complex sequential decision-making problems where standard Reinforcement Learning (RL) algorithms fail. A number of recent methods extend IL to few-shot learning scenarios, where a meta-trained policy learns to quickly master new tasks using limited demonstrations. However, although Inverse Reinforcement Learning (IRL) often outperforms Behavioral Cloning (BC) in terms of imitation quality, most of these approaches build on BC due to its simple optimization objective. In this work, we propose SMILe, a scalable framework for Meta Inverse Reinforcement Learning (Meta-IRL) based on maximum entropy IRL, which can learn high-quality policies from few demonstrations. We examine the efficacy of our method on a variety of high-dimensional simulated continuous control tasks and observe that SMILe significantly outperforms Meta-BC. Furthermore, we observe that SMILe performs comparably to or outperforms Meta-DAgger, while being applicable in the state-only setting and not requiring online experts. To our knowledge, our approach is the first efficient method for Meta-IRL that scales to the function approximator setting. For datasets and code to reproduce results, please refer to https://github.com/KamyarGh/rlswiss/blob/master/reproducing/smilepaper.md .
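To make the "context-conditional policy" idea in the title concrete, the sketch below shows one plausible way to condition a policy on a handful of demonstrations: a permutation-invariant encoder maps the demonstration transitions of a task to a context vector, and the policy takes that vector alongside the current state. This is a minimal PyTorch illustration under assumed architectural choices (class names, network sizes, mean-pooled set encoding); it is not the authors' implementation, and in the full Meta-IRL setup the learned reward/discriminator would also receive the context.

```python
import torch
import torch.nn as nn

class DemoEncoder(nn.Module):
    """Embeds a small set of (state, action) demonstration pairs into a single
    task context vector via mean pooling (permutation-invariant set encoding).
    Hypothetical architecture, not taken from the paper."""
    def __init__(self, obs_dim, act_dim, context_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, context_dim),
        )

    def forward(self, demo_obs, demo_acts):
        # demo_obs: (N, obs_dim), demo_acts: (N, act_dim) from one task's demos
        per_step = self.net(torch.cat([demo_obs, demo_acts], dim=-1))
        return per_step.mean(dim=0)  # (context_dim,) task context

class ContextConditionalPolicy(nn.Module):
    """Gaussian policy conditioned on both the current state and the task context."""
    def __init__(self, obs_dim, act_dim, context_dim, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim + context_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean = nn.Linear(hidden, act_dim)
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def forward(self, obs, context):
        # obs: (B, obs_dim); context: (context_dim,) shared across the batch
        ctx = context.unsqueeze(0).expand(obs.shape[0], -1)
        h = self.trunk(torch.cat([obs, ctx], dim=-1))
        return torch.distributions.Normal(self.mean(h), self.log_std.exp())
```

At meta-test time, the encoder would be run once on the few demonstrations of a new task to produce the context, after which the policy acts without further gradient updates; this is the mechanism that allows fast adaptation from limited demonstrations.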
