Poster in Workshop: Fine-Tuning in Modern Machine Learning: Principles and Scalability

Learning Robust Representations for Transfer in Reinforcement Learning

Faisal Ahmed Abdelrahman Mohamed · Roger Creus Castanyer · Hongyao Tang · Zahra Sheikhbahaee · Glen Berseth


Abstract:

Learning transferable representations for deep reinforcement learning (RL) is challenging due to inherent non-stationarity, distribution shift, and unstable training dynamics. To be useful, a transferable representation must be robust to these factors. In this work, we introduce a new architecture and training strategy for learning robust representations for transfer learning in RL. We propose leveraging multiple CNN encoders and training them not to specialize in regions of the state space but instead to match each other's representations. We find that the learned representations transfer well across many Atari tasks, yielding better transfer performance and data efficiency than training from scratch.
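Since only the abstract is available here, the following is a minimal, hypothetical sketch of the stated idea: several CNN encoders whose outputs are regularized to agree, so that no single encoder specializes to one region of the state space. The encoder architecture, the embedding size, and the use of a pairwise MSE matching term are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch (not the authors' code) of multiple CNN encoders
# regularized to match each other's representations. Architecture,
# dimensions, and the MSE matching loss are assumptions for illustration.
import torch
import torch.nn as nn

class SmallEncoder(nn.Module):
    """One of several identical CNN encoders over Atari-style frames."""
    def __init__(self, in_channels: int = 4, embed_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, embed_dim),  # 7x7 spatial map for 84x84 input
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def representation_matching_loss(encoders, obs: torch.Tensor) -> torch.Tensor:
    """Penalize pairwise disagreement between encoder outputs so that no
    encoder specializes to one area of the state space (assumed MSE)."""
    embeddings = [enc(obs) for enc in encoders]
    loss = obs.new_zeros(())
    pairs = 0
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            loss = loss + ((embeddings[i] - embeddings[j]) ** 2).mean()
            pairs += 1
    return loss / max(pairs, 1)

# Usage: add the matching term as an auxiliary loss to the RL objective.
encoders = nn.ModuleList(SmallEncoder() for _ in range(3))
obs = torch.randn(8, 4, 84, 84)  # batch of stacked 84x84 Atari frames
aux_loss = representation_matching_loss(encoders, obs)
```

One plausible reading of this design is that the matching term acts as a consensus regularizer: encoders that would otherwise overfit to distinct parts of the state space are pulled toward a shared, and hence more transferable, representation.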
