Poster in Workshop: Intrinsically Motivated Open-ended Learning (IMOL)
Variational Learned Priors for Intrinsic Motivation
Jessica Nicholson · Joseph Goodier · Akshil Patel · Özgür Şimşek
Keywords: intrinsic motivation, reinforcement learning, exploration
Efficient exploration in reinforcement learning is challenging, especially in sparse-reward environments. Intrinsic motivation, such as rewarding state novelty, can enhance exploration. We propose an intrinsic motivation approach, called Variational Learned Priors, that uses variational state encoding to estimate novelty: the intrinsic reward is the Kullback-Leibler divergence between the posterior distribution of a Variational Autoencoder and its learned prior. We assess this intrinsic reward with four different learned priors. Our results show that the method improves exploration efficiency and accelerates the accumulation of extrinsic reward across a variety of domains.
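The abstract does not specify the architecture, so the following is a minimal PyTorch sketch of the core idea under stated assumptions: a module (the name `VAENoveltyReward`, the encoder widths, and `latent_dim` are all hypothetical) that encodes a state into a diagonal-Gaussian posterior q(z|s) and scores novelty as KL(q(z|s) || p(z)), using a single learned diagonal-Gaussian prior as one illustrative choice, not necessarily one of the four priors the paper evaluates.

```python
import torch
import torch.nn as nn
import torch.distributions as D


class VAENoveltyReward(nn.Module):
    """KL-based novelty reward: r_int(s) = KL(q(z|s) || p(z)).

    A sketch, not the authors' code: the encoder widths, latent size,
    and the single learned diagonal-Gaussian prior are assumptions
    (the paper evaluates four different learned priors).
    """

    def __init__(self, obs_dim: int, latent_dim: int = 32):
        super().__init__()
        # Encoder head of the VAE: maps a state to the mean and
        # log-variance of the approximate posterior q(z | s).
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 2 * latent_dim),
        )
        # Learned prior p(z), trained jointly with the VAE rather than
        # fixed to a standard normal.
        self.prior_mean = nn.Parameter(torch.zeros(latent_dim))
        self.prior_logvar = nn.Parameter(torch.zeros(latent_dim))

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        mean, logvar = self.encoder(obs).chunk(2, dim=-1)
        posterior = D.Normal(mean, (0.5 * logvar).exp())
        prior = D.Normal(self.prior_mean, (0.5 * self.prior_logvar).exp())
        # Intrinsic reward: KL divergence per state, summed over the
        # independent latent dimensions.
        return D.kl_divergence(posterior, prior).sum(dim=-1)
```

In use, this bonus would be added to the environment reward, e.g. `total_reward = extrinsic + beta * model(obs)` for some scaling coefficient `beta` (also an assumption): states that are poorly explained by the learned prior receive a larger novelty bonus, encouraging the agent to visit them.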