Poster in Workshop: Intrinsically Motivated Open-ended Learning (IMOL)
Toward Universal and Interpretable World Models for Open-ended Learning Agents
Lancelot Da Costa
Keywords: [ world model ] [ representation learning ] [ biomimetic ] [ Bayesian ] [ agent ]
We introduce a generic, compositional, and interpretable class of generative world models that supports open-ended learning agents. This is a sparse class of Bayesian networks capable of approximating a broad range of stochastic processes, allowing agents to learn world models in a way that may be both interpretable and computationally scalable. By integrating Bayesian structure learning with intrinsically motivated (model-based) planning, this approach enables agents to actively develop and refine their world models, which may lead to open-ended learning and more robust, adaptive behavior.
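To make the general idea concrete, here is a minimal illustrative sketch, not the authors' method: an agent maintains a Dirichlet-categorical Bayesian model of discrete transition dynamics p(s' | s, a) and selects actions by expected information gain, a simple form of intrinsically motivated, model-based exploration. The toy environment, state and action sizes, and variable names below are assumptions introduced purely for illustration; the paper's sparse Bayesian-network structure learning is not implemented here.

```python
# Hypothetical, minimal sketch of Bayesian world-model learning with an
# intrinsic (expected information gain) drive. Not the paper's implementation.
import numpy as np
from scipy.special import gammaln, digamma

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 5, 3  # assumed toy sizes

# Hidden dynamics of a toy environment -- used only to simulate observations.
true_dynamics = rng.dirichlet(np.ones(N_STATES), size=(N_STATES, N_ACTIONS))

# World model: Dirichlet counts over next-state distributions, one per (s, a).
alpha = np.ones((N_STATES, N_ACTIONS, N_STATES))

def dirichlet_kl(a, b):
    """KL divergence KL(Dir(a) || Dir(b)) between two Dirichlet distributions."""
    a0, b0 = a.sum(), b.sum()
    return (gammaln(a0) - gammaln(b0)
            - np.sum(gammaln(a) - gammaln(b))
            + np.sum((a - b) * (digamma(a) - digamma(a0))))

def expected_info_gain(s, a):
    """Expected KL between updated and current beliefs after one more observation."""
    counts = alpha[s, a]
    pred = counts / counts.sum()  # posterior predictive over next states
    gains = [dirichlet_kl(counts + np.eye(N_STATES)[sp], counts)
             for sp in range(N_STATES)]
    return float(np.dot(pred, gains))

s = 0
for t in range(200):
    # Intrinsically motivated action selection: maximise expected information gain.
    a = int(np.argmax([expected_info_gain(s, act) for act in range(N_ACTIONS)]))
    s_next = rng.choice(N_STATES, p=true_dynamics[s, a])
    alpha[s, a, s_next] += 1  # Bayesian update of the world model
    s = s_next

print("Learned p(s'|s=0,a=0):", np.round(alpha[0, 0] / alpha[0, 0].sum(), 2))
print("True    p(s'|s=0,a=0):", np.round(true_dynamics[0, 0], 2))
```

In this sketch the "world model" is a single fully connected conditional table rather than a sparse Bayesian network; the abstract's approach would additionally place a posterior over network structures and refine it alongside the parameters.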