Poster in Workshop: NeuroAI: Fusing Neuroscience and AI for Intelligent Solutions
Predictive Learning Induces Probabilistic Cognitive Maps
Yeowon Kim · Yul HR Kang
Navigation requires inferring one's pose (location and heading) in an environment from noisy and ambiguous egocentric sensory inputs. While place cells in the brain are thought to represent an animal's allocentric location and the associated uncertainty, the mechanisms by which these probabilistic representations are learned remain unclear. To address this, we developed a model of an agent that navigates using noisy egocentric visual and self-motion signals. We demonstrate that, when the agent is trained to predict future visual stimuli, its hidden representations closely resemble the posterior belief about pose computed by a Bayesian ideal observer. Moreover, these hidden representations, like the ideal observer's posterior beliefs, resemble place cell activity in both familiar and unfamiliar environments. This resemblance is significantly weaker when the agent is instead trained as an autoencoder to reproduce its current visual input. Our findings suggest that learning to predict noisy sensory inputs can give rise to probabilistic cognitive maps, that is, probabilistic representations of latent states such as pose, which are essential for Bayesian inference in the brain.
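To make the training setup concrete, below is a minimal sketch of a predictive objective of the kind the abstract describes, contrasted with the autoencoder baseline. This is our illustration, not the authors' implementation: the GRU architecture, the input/hidden dimensions, the simple next-frame mean-squared-error target, and all names (`PredictiveAgent`, `training_loss`) are assumptions made for exposition.

```python
import torch
import torch.nn as nn

class PredictiveAgent(nn.Module):
    """Illustrative sketch: an RNN that integrates noisy egocentric visual
    input and self-motion signals into a hidden state. All sizes and design
    choices here are assumptions, not the poster's actual model."""

    def __init__(self, vis_dim=64, motion_dim=3, hidden_dim=128):
        super().__init__()
        self.rnn = nn.GRU(vis_dim + motion_dim, hidden_dim, batch_first=True)
        self.readout = nn.Linear(hidden_dim, vis_dim)

    def forward(self, vis, motion):
        # vis:    (batch, T, vis_dim)    noisy egocentric visual frames
        # motion: (batch, T, motion_dim) noisy self-motion (e.g., speed, turn)
        h, _ = self.rnn(torch.cat([vis, motion], dim=-1))
        return self.readout(h)  # one visual prediction per time step

def training_loss(model, vis, motion, predictive=True):
    """Predictive objective: the hidden state at time t must anticipate the
    frame at t+1. Autoencoder baseline: reproduce the current frame."""
    pred = model(vis, motion)
    if predictive:
        return nn.functional.mse_loss(pred[:, :-1], vis[:, 1:])
    return nn.functional.mse_loss(pred, vis)

# Toy usage with random stand-in data:
vis = torch.randn(8, 50, 64)
motion = torch.randn(8, 50, 3)
model = PredictiveAgent()
loss = training_loss(model, vis, motion, predictive=True)
loss.backward()
```

Setting `predictive=False` yields the autoencoder control condition from the abstract: the same network and inputs, but the target is the current frame rather than the future one, which is the comparison under which the resemblance to posterior beliefs weakens.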
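For reference, the Bayesian ideal observer against which the hidden representations are compared would, under standard assumptions, maintain its posterior belief over pose by recursive Bayesian filtering. The generic form, in our own notation (the poster's specific observation and transition models are not given here):

```latex
% Posterior over pose s_t given noisy visual observations o_{1:t}
% and self-motion signals u_{1:t} (generic Bayes filter; notation ours).
p(s_t \mid o_{1:t}, u_{1:t}) \propto
  p(o_t \mid s_t)
  \int p(s_t \mid s_{t-1}, u_t)\,
       p(s_{t-1} \mid o_{1:t-1}, u_{1:t-1})\,\mathrm{d}s_{t-1}
```

Here $p(o_t \mid s_t)$ captures the noisy, ambiguous visual likelihood and $p(s_t \mid s_{t-1}, u_t)$ the noisy self-motion dynamics; the claim in the abstract is that the predictively trained network's hidden state comes to resemble this posterior.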