

Poster in Workshop: Intrinsically Motivated Open-ended Learning (IMOL)

Diversity Progress for Goal Selection in Discriminability-Motivated RL

Erik M. Lintunen · Nadia Ady · Christian Guckelsberger

Keywords: [ open-ended learning ] [ skill discovery ] [ intrinsically-motivated reinforcement learning ] [ learning progress ] [ automatic curriculum learning ] [ autotelic agents ] [ intrinsically-motivated goal-exploration processes ] [ diversity-seeking agents ]


Abstract:

Non-uniform goal selection has the potential to improve the reinforcement learning (RL) of skills over uniform-random selection. In this paper, we introduce a method for learning a goal-selection policy in intrinsically-motivated goal-conditioned RL: "Diversity Progress" (DP). The learner forms a curriculum based on observed improvement in discriminability over its set of goals. Our proposed method is applicable to the class of discriminability-motivated agents, where the intrinsic reward is computed as a function of the agent's certainty of following the true goal being pursued. This reward can motivate the agent to learn a set of diverse skills without extrinsic rewards. We demonstrate empirically that a DP-motivated agent can learn a set of distinguishable skills faster than previous approaches, and do so without suffering from a collapse of the goal distribution---a known issue with some prior approaches. We end with plans to take this proof-of-concept forward.
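The goal-selection idea sketched in the abstract---sampling goals in proportion to observed progress in their discriminability---could look roughly like the following. This is a minimal illustrative sketch, not the authors' implementation: the class name, the fast/slow moving-average estimate of discriminability, and the softmax sampling rule are all assumptions made for the sake of the example.

```python
import numpy as np

class DPGoalSelector:
    """Illustrative sketch of Diversity Progress (DP) goal selection.

    Assumes per-goal discriminability is estimated from the discriminator's
    recent certainty about the true goal; progress is approximated as the
    gap between a fast- and a slow-moving average of that estimate.
    """

    def __init__(self, n_goals, ema=0.9, temperature=0.1, rng=None):
        self.n_goals = n_goals
        self.ema = ema                   # smoothing for the fast estimate
        self.temperature = temperature   # softmax temperature for sampling
        self.q_fast = np.zeros(n_goals)  # fast-moving discriminability estimate
        self.q_slow = np.zeros(n_goals)  # slow-moving (lagging) estimate
        self.rng = rng or np.random.default_rng()

    def update(self, goal, discriminability):
        # discriminability: e.g. the discriminator's probability assigned
        # to the true goal after a rollout pursuing `goal`
        self.q_fast[goal] = (self.ema * self.q_fast[goal]
                             + (1 - self.ema) * discriminability)
        self.q_slow[goal] = 0.5 * (self.q_slow[goal] + self.q_fast[goal])

    def progress(self):
        # |fast - slow| approximates recent change in discriminability,
        # i.e. learning progress on each goal
        return np.abs(self.q_fast - self.q_slow)

    def sample_goal(self):
        # Softmax over progress: goals whose discriminability is changing
        # are sampled more often, so the curriculum neither fixates on
        # mastered goals nor collapses the goal distribution.
        logits = self.progress() / self.temperature
        p = np.exp(logits - logits.max())
        p /= p.sum()
        return int(self.rng.choice(self.n_goals, p=p))
```

Under these assumptions, a training loop would call `update` with the discriminator's certainty after each episode and `sample_goal` to pick the next goal to pursue.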
