

Poster in Workshop: Intrinsically Motivated Open-ended Learning (IMOL)

Model-Agnostic Meta-Learning with Open-Ended Reinforcement Learning

Aya Shabbar

Keywords: [ RL ] [ MAML ] [ Meta-Learning ] [ Open-Ended ]


Abstract:

This paper builds on Open-Ended Reinforcement Learning with Neural Reward Functions, proposed by Meier and Mujika [1], in which skills are defined by reward functions encoded by neural networks. A key limitation of their approach is that the policy must be re-learned for each new skill the agent acquires. We propose integrating meta-learning algorithms to address this problem. Specifically, we study Model-Agnostic Meta-Learning (MAML), which we believe could make policy learning more efficient. MAML learns an initialization of the model parameters that can be fine-tuned with a small number of examples from a new task, allowing rapid adaptation to new tasks.
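To make the adaptation mechanism concrete, the following is a minimal MAML sketch in PyTorch on a toy sine-regression problem. It is an illustrative sketch only: the task generator (`make_task`), the two-layer network, and hyperparameters such as `inner_lr` are assumptions for demonstration, not the paper's RL-with-neural-reward-functions setup.

```python
import torch
import torch.nn.functional as F

def make_task():
    # Each "task" is a sine wave with a random amplitude and phase.
    amp = torch.rand(1) * 4.0 + 0.1
    phase = torch.rand(1) * 3.1416
    def sample(n):
        x = torch.rand(n, 1) * 10.0 - 5.0
        return x, amp * torch.sin(x + phase)
    return sample

def forward(params, x):
    # Small two-layer MLP applied with an explicit parameter list,
    # so adapted parameters can be swapped in during the inner loop.
    w1, b1, w2, b2 = params
    return torch.relu(x @ w1 + b1) @ w2 + b2

def init_param(*shape):
    return (0.1 * torch.randn(*shape)).requires_grad_()

# Meta-parameters: the shared initialization that MAML learns.
params = [init_param(1, 40), init_param(40), init_param(40, 1), init_param(1)]
meta_opt = torch.optim.Adam(params, lr=1e-3)
inner_lr = 0.01

for step in range(1000):
    meta_opt.zero_grad()
    for _ in range(4):  # meta-batch of tasks
        sample = make_task()
        x_s, y_s = sample(10)  # support set: few examples for adaptation
        x_q, y_q = sample(10)  # query set: evaluates the adapted params

        # Inner loop: one gradient step away from the shared initialization.
        # create_graph=True keeps the update differentiable (second-order MAML).
        loss = F.mse_loss(forward(params, x_s), y_s)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        adapted = [p - inner_lr * g for p, g in zip(params, grads)]

        # Outer loss: query-set performance of the adapted parameters;
        # its gradient w.r.t. the initialization accumulates across tasks.
        F.mse_loss(forward(adapted, x_q), y_q).backward()
    meta_opt.step()
```

After meta-training, a few inner-loop steps on a handful of examples from an unseen task are enough to fit it, which is the property the paper hopes to exploit so each new skill no longer requires learning a policy from scratch.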
