

Poster in Workshop: Intrinsically Motivated Open-ended Learning (IMOL)

Incentivizing Exploration With Causal Curiosity as Intrinsic Motivation

Elias Durand · Mehdi Khamassi

Keywords: [ Causal Reinforcement Learning ] [ Intrinsic Motivation ] [ Causal Inference ] [ Model-Based Reinforcement Learning ] [ Causal Curiosity ] [ Causal Model Learning ]


Abstract:

Reinforcement learning (RL) has demonstrated remarkable success in decision-making tasks, yet often lacks the ability to decipher and leverage causal relationships in complex environments. This paper introduces a novel "causal model-based reinforcement learning agent" that integrates causal inference with model-based RL to enhance exploration and decision-making. Our approach incorporates an intrinsic motivation mechanism based on causal curiosity, quantified by the changes in the agent's internal causal model. We present an algorithm that maintains separate value functions for extrinsic rewards and intrinsic causal discovery, allowing balanced exploration of both task-oriented goals and causal structures. Theoretical analysis suggests convergence properties under certain conditions, while empirical results on a blackjack task and on structural causal model environments demonstrate improved learning efficiency and strategic decision-making compared to standard RL. This work contributes to bridging the gap between reinforcement learning and causal inference.
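To make the mechanism described above concrete, here is a minimal, illustrative Python sketch of the core idea: an agent that keeps separate value functions for extrinsic reward and for a causal-curiosity bonus, where the bonus is the magnitude of change in the agent's internal causal model after each observation. The tabular setting, the edge-strength matrix representation of the causal model, and all class, parameter, and update-rule choices below are assumptions for illustration, not the authors' actual algorithm.

```python
import numpy as np

class CausalCuriosityAgent:
    """Illustrative sketch (hypothetical names/updates): two Q-tables,
    one extrinsic and one intrinsic, with intrinsic reward defined as
    the change in an internal causal model after each transition."""

    def __init__(self, n_states, n_actions, n_vars,
                 alpha=0.1, gamma=0.95, beta=0.5, model_lr=0.05):
        self.q_ext = np.zeros((n_states, n_actions))  # value of extrinsic reward
        self.q_int = np.zeros((n_states, n_actions))  # value of causal discovery
        # Internal causal model: edge-strength matrix over observed variables
        # (a crude stand-in for a learned structural causal model).
        self.causal_model = np.zeros((n_vars, n_vars))
        self.alpha, self.gamma = alpha, gamma
        self.beta, self.model_lr = beta, model_lr

    def act(self, state, epsilon=0.1):
        # Act epsilon-greedily on a beta-weighted sum of both value functions,
        # balancing task-oriented goals against causal exploration.
        if np.random.rand() < epsilon:
            return np.random.randint(self.q_ext.shape[1])
        combined = self.q_ext[state] + self.beta * self.q_int[state]
        return int(np.argmax(combined))

    def causal_curiosity(self, obs, next_obs):
        # Nudge edge strengths toward the pattern of variables that changed
        # together on this transition (obs/next_obs are NumPy arrays), and
        # return the total magnitude of that model update as the intrinsic
        # reward: curiosity = how much the causal model changed.
        changed = (obs != next_obs).astype(float)
        delta = self.model_lr * (np.outer(changed, changed) - self.causal_model)
        self.causal_model += delta
        return float(np.abs(delta).sum())

    def update(self, state, action, ext_reward, next_state, obs, next_obs):
        # Identical Q-learning updates applied to each value function,
        # each with its own reward signal.
        r_int = self.causal_curiosity(obs, next_obs)
        for q, r in ((self.q_ext, ext_reward), (self.q_int, r_int)):
            td = r + self.gamma * q[next_state].max() - q[state, action]
            q[state, action] += self.alpha * td
```

In this sketch the two value functions are learned independently and combined only at action selection via the weight beta, which mirrors the abstract's separation of extrinsic and intrinsic values; the actual causal-model update used in the paper is not specified in the abstract and is replaced here by a simple co-change heuristic.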
