

Poster
in
Workshop: Intrinsically Motivated Open-ended Learning (IMOL)

From Laws to Motivation: Guiding Exploration through Law-Based Intrinsic Reasoning and Rewards

Ziyu Chen · Zhiqing Xiao · Xinbei Jiang · Junbo Zhao

Keywords: [ Reward Design ] [ Reinforcement Learning ] [ Intrinsic Motivation ]


Abstract: Large Language Models (LLMs) and Reinforcement Learning (RL) are two powerful approaches for building autonomous agents. However, with only a limited understanding of the game environment, agents often resort to inefficient, trial-and-error exploration and struggle to plan from a macro perspective. We propose a method that extracts experience from interaction records to model the underlying laws of the game environment, using this experience as intrinsic motivation to guide agents. Expressed in natural language, this experience is highly flexible: it can either assist agents in reasoning directly or be transformed into rewards that guide training. Our evaluation in $\texttt{Crafter}$ demonstrates that both RL and LLM agents benefit from these experiences, leading to improved overall performance.
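The abstract's idea of turning language-based environment laws into training rewards can be sketched as reward shaping: each extracted law is paired with a predicate over transitions, and the agent earns an intrinsic bonus the first time a law is satisfied. This is a minimal illustrative sketch, not the paper's implementation; all names (`Law`, `LawBasedReward`, the example laws) and the Crafter-style crafting rules are assumptions.

```python
# Hypothetical sketch: laws extracted from interaction records are stored as
# natural-language rules paired with predicates, and the agent receives an
# intrinsic bonus the first time a transition satisfies each law.
from dataclasses import dataclass, field
from typing import Callable, Dict, Set


@dataclass
class Law:
    text: str                            # the law in natural language
    holds: Callable[[dict, dict], bool]  # predicate over (state, next_state)


@dataclass
class LawBasedReward:
    laws: Dict[str, Law]
    bonus: float = 0.5
    satisfied: Set[str] = field(default_factory=set)

    def __call__(self, state: dict, next_state: dict, extrinsic: float) -> float:
        intrinsic = 0.0
        for name, law in self.laws.items():
            if name not in self.satisfied and law.holds(state, next_state):
                self.satisfied.add(name)  # reward each law only once
                intrinsic += self.bonus
        return extrinsic + intrinsic


# Example laws, loosely inspired by Crafter-style crafting dependencies
# (illustrative, not taken from the paper).
laws = {
    "collect_wood": Law(
        "Collecting wood increases the wood count.",
        lambda s, ns: ns.get("wood", 0) > s.get("wood", 0),
    ),
    "table_before_pickaxe": Law(
        "A crafting table is required to make a pickaxe.",
        lambda s, ns: ns.get("pickaxe", 0) > s.get("pickaxe", 0)
        and s.get("table", 0) > 0,
    ),
}

shaper = LawBasedReward(laws)
r1 = shaper({"wood": 0}, {"wood": 1}, extrinsic=0.0)  # law fires: bonus paid
r2 = shaper({"wood": 1}, {"wood": 2}, extrinsic=0.0)  # already rewarded: 0.0
```

Because each law carries its natural-language `text`, the same objects could instead be injected into an LLM agent's prompt for direct reasoning, matching the abstract's claim that the experience serves both uses.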
