Poster in Workshop: Intrinsically Motivated Open-ended Learning (IMOL)
Testing causal hypotheses through Hierarchical Reinforcement Learning
Anthony GX-Chen · Dongyan Lin · Mandana Samiei
Keywords: [ exploration ] [ causality ] [ hierarchical RL ] [ children ] [ agent as scientist ]
A goal of AI research is to develop agentic systems capable of operating in open-ended environments with the autonomy and adaptability of a scientist: generating hypotheses, empirically testing them, and drawing conclusions about how the world works. We propose Structural Causal Models (SCMs) as a formalization of the hypothesis space, and hierarchical reinforcement learning (HRL) as a key ingredient for building agents that can systematically discover the correct SCM. This framework yields agent behavior that generates and tests hypotheses, enabling transferable learning about the world. Finally, we discuss practical implementation strategies.