Spotlight in Workshop: Information-Theoretic Principles in Cognitive Systems (InfoCog)
Active Vision with Predictive Coding and Uncertainty Minimization
Abdelrahman Sharafeldin · Nabil Imam · Hannah Choi
We present an end-to-end procedure for embodied visual exploration based on two biologically inspired computations: predictive coding and uncertainty minimization. The procedure can be applied in a task-independent, intrinsically driven manner. We evaluate our approach on an active vision task, where an agent must actively sample its visual environment to gather information. We show that our model builds unsupervised representations that allow it to actively sample and efficiently categorize sensory scenes. We further show that using these representations as input for downstream classification yields better data efficiency and faster learning than baseline models, while requiring fewer parameters. Finally, the modularity of our model allows us to analyze its internal mechanisms and to draw insights into the interactions between perception and action during exploratory behavior.
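The abstract only sketches the core loop at a high level. Below is a minimal, self-contained illustration of how predictive coding and uncertainty minimization could interact in such a loop; the toy scene, the per-location linear predictor, and the running error-variance used as an uncertainty proxy are all illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch (not the paper's implementation) of an active-vision loop that
# couples a predictive-coding-style error signal with uncertainty-driven
# glimpse selection. All names and model choices here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy "scene": a vector the agent can only observe through small glimpses.
scene = rng.normal(size=64)
glimpse_size = 8
n_locations = len(scene) // glimpse_size

# Internal model: per-location predictions and their prediction-error variance.
pred = np.zeros((n_locations, glimpse_size))   # predicted glimpse content
err_var = np.ones(n_locations)                 # running uncertainty estimate
lr = 0.5                                       # prediction update rate

def glimpse(loc):
    """Sample the scene at a chosen location (the agent's action)."""
    return scene[loc * glimpse_size:(loc + 1) * glimpse_size]

for step in range(12):
    # Action selection: attend to the location with the highest uncertainty.
    loc = int(np.argmax(err_var))

    # Perception: compare the observed glimpse with the top-down prediction.
    obs = glimpse(loc)
    error = obs - pred[loc]                    # predictive-coding error signal

    # Learning: reduce future prediction error at this location.
    pred[loc] += lr * error

    # Uncertainty update: track how well this location is currently predicted.
    err_var[loc] = 0.5 * err_var[loc] + 0.5 * np.mean(error ** 2)

    print(f"step {step}: loc={loc}, prediction error={np.mean(error**2):.3f}")
```

Under these assumptions, the agent repeatedly directs its glimpses toward poorly predicted (high-uncertainty) locations and updates its predictions from the resulting errors, so exploration and representation learning drive each other in the same loop.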