Poster in Workshop: Intrinsically Motivated Open-ended Learning (IMOL)
Enhancing Memory Retention in Continual Model-Based Reinforcement Learning
Haotian Fu · Yixiang Sun · Michael Littman · George Konidaris
Keywords: [ intrinsic motivation ] [ model-based reinforcement learning ] [ continual learning ] [ catastrophic forgetting ]
We propose DRAGO, a novel approach to continual model-based reinforcement learning that addresses catastrophic forgetting and supports the incremental development of world models across a sequence of tasks. DRAGO comprises two key components: Synthetic Experience Rehearsal, which uses generative models to create synthetic experiences from past tasks so the agent can reinforce previously learned dynamics without storing data, and Regaining Memories Through Exploration, which introduces an intrinsic reward mechanism that guides the agent back toward states relevant to prior tasks. Together, these components allow the agent to maintain a comprehensive, continually developing world model, enabling more effective learning and adaptation across diverse environments. Empirical evaluations show that DRAGO preserves knowledge across tasks better than standard MBRL methods and achieves superior performance in a range of continual learning scenarios.
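To make the two components concrete, here is a minimal toy sketch of the ideas the abstract describes: a generative model replays synthetic transitions from a past task while a world model is trained on mixed real-plus-synthetic data, and an intrinsic reward favors states the learned model already predicts well. All class and function names, the linear dynamics model, and the Gaussian generator are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)


class GaussianGenerator:
    """Toy generative model of past transitions (a stand-in for the
    learned generative model in Synthetic Experience Rehearsal)."""

    def fit(self, data):
        # Fit a joint Gaussian over (state, next_state) rows.
        self.mu = data.mean(axis=0)
        self.cov = np.cov(data.T) + 1e-6 * np.eye(data.shape[1])

    def sample(self, n):
        # Draw synthetic transitions instead of storing real ones.
        return rng.multivariate_normal(self.mu, self.cov, size=n)


def train_world_model(real, synthetic, lr=0.1, steps=200):
    """Fit a linear dynamics model s' = s @ W by gradient descent on a
    mix of real transitions from the current task and synthetic
    transitions replayed from the generator."""
    data = np.vstack([real, synthetic])
    X, Y = data[:, :2], data[:, 2:]  # 2-D states: columns are (s, s')
    W = np.zeros((2, 2))
    for _ in range(steps):
        grad = X.T @ (X @ W - Y) / len(X)
        W -= lr * grad
    return W


def intrinsic_reward(state, model_W, next_state):
    """Reward transitions the world model predicts well, nudging the
    agent back toward previously learned regions (a simplified version
    of Regaining Memories Through Exploration)."""
    pred = state @ model_W
    return -np.linalg.norm(pred - next_state)
```

In this sketch, rehearsal replaces a stored replay buffer: only the generator's parameters carry information about past tasks, and the intrinsic reward is simply the negative prediction error of the current world model.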