Poster in Workshop: NeuroAI: Fusing Neuroscience and AI for Intelligent Solutions
What should a neuron aim for? Designing local objective functions based on information theory
Andreas Schneider · Valentin Neuhaus · David A. Ehrlich · Alexander Ecker · Abdullah Makkeh · Viola Priesemann · Michael Wibral
In modern deep neural networks, the learning dynamics of individual neurons are often obscure, as the networks are trained via global optimization. Conversely, biological systems build on self-organized, local learning, achieving robustness and efficiency with limited global information. Here we show how to enhance the interpretability of individual artificial neurons' function by developing a local learning framework similar to that of biological neurons. The local objective function is parameterized using a recent extension of information theory, Partial Information Decomposition (PID), which decomposes the information that a set of information sources holds about an outcome into unique, redundant, and synergistic contributions. Our framework enables neurons to locally shape the integration of information from various input classes by selecting which of the inputs should contribute uniquely, redundantly, or synergistically to the output. This selection is expressed as a learning goal for an individual neuron, which can be derived directly from intuitive reasoning or via numerical optimization, offering a window into task-relevant local information processing. Achieving performance on par with backpropagation while preserving neuron-level interpretability, our work advances a principled information-theoretic foundation for local learning strategies.
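To make the decomposition concrete, the sketch below computes a PID for the classic XOR example, where two binary sources carry purely synergistic information about the target. It uses the Williams and Beer I_min redundancy measure as one standard choice; this is an illustrative toy, not the authors' learning framework, and all names in the code are hypothetical.

```python
import numpy as np

def mutual_information(p_xy):
    """I(X;Y) in bits from a joint probability table p_xy[x, y]."""
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float(np.sum(p_xy[mask] * np.log2(p_xy[mask] / (px @ py)[mask])))

def specific_information(p_st, t):
    """I(S; T=t) in bits: how much source S tells us about the outcome t."""
    pt = p_st.sum(axis=0)[t]
    ps = p_st.sum(axis=1)
    p_s_given_t = p_st[:, t] / pt
    mask = p_s_given_t > 0
    return float(np.sum(p_s_given_t[mask] *
                        np.log2(p_s_given_t[mask] / ps[mask])))

# Joint distribution p(x1, x2, t) with T = XOR(X1, X2) and uniform,
# independent binary inputs: four equally likely patterns.
p = np.zeros((2, 2, 2))
for x1 in (0, 1):
    for x2 in (0, 1):
        p[x1, x2, x1 ^ x2] = 0.25

i1 = mutual_information(p.sum(axis=1))         # I(X1; T) = 0 bits
i2 = mutual_information(p.sum(axis=0))         # I(X2; T) = 0 bits
i_joint = mutual_information(p.reshape(4, 2))  # I((X1,X2); T) = 1 bit

# Williams & Beer redundancy I_min: average over target values of the
# minimum specific information any single source provides about that value.
p1t, p2t = p.sum(axis=1), p.sum(axis=0)
red = sum(p.sum(axis=(0, 1))[t] *
          min(specific_information(p1t, t), specific_information(p2t, t))
          for t in (0, 1))

# Consistency equations of the two-source PID lattice.
unq1 = i1 - red
unq2 = i2 - red
syn = i_joint - unq1 - unq2 - red
print(red, unq1, unq2, syn)  # XOR: 0 redundant, 0 unique, 1 bit synergy
```

Neither source alone reduces uncertainty about the XOR output, yet together they determine it fully; a local objective in the spirit of the abstract could, for instance, reward a neuron for maximizing exactly this synergistic term.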