

Poster in Workshop: Symmetry and Geometry in Neural Representations

Does Maximizing Neural Regression Scores Teach Us About The Brain?

Rylan Schaeffer · Mikail Khona · Sarthak Chandra · Mitchell Ostrow · Brando Miranda · Sanmi Koyejo

Keywords: [ neural network models of the brain ] [ neural alignment ] [ similarity metrics ] [ computational neuroscience ] [ Brainscore ] [ neural regressions methodology ]


Abstract:

A prominent methodology in computational neuroscience posits that the brain can be understood by identifying which artificial neural network models most accurately predict biological neural activations, measured by regression test error or similar metrics. In this opinion piece, we argue that this methodology has become overused and that a more pluralistic approach is needed. Our view is that the field lacks a canonical definition of model goodness; rather than engaging with this difficult question, the neural regressions methodology simply asserts a proxy, neural predictivity, and then overfits to that proxy. We begin with an egregious failure of the neural regressions methodology in which the best-fitting models disagreed with key properties of the neural circuit. Next, we highlight converging empirical and mathematical evidence that explains the disconnect: (linear) neural regressions are simply discovering the implicit biases of (linear) regression, which may not appropriately identify models that are actually brain-like. This is an instance of Goodhart's law: by selecting neural network models that optimize (linear) neural predictivity, the field's results have devolved into re-discovering general properties of (linear) regression rather than furthering our understanding of the brain. These insights suggest that the neural regressions methodology may be insufficient for understanding the brain, and we call for a critical reevaluation of this methodology in computational neuroscience.
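For readers unfamiliar with the scoring step the abstract critiques, the sketch below illustrates how a "neural predictivity" score is typically computed: fit a linear (here, ridge) regression from a model's activations to recorded neural responses, then report held-out goodness of fit. This is a minimal sketch on synthetic data with illustrative array names and shapes, not the authors' pipeline or any benchmark's exact implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_stimuli, n_features, n_neurons = 500, 256, 64

# Hypothetical data: model activations and biological recordings for the same stimuli.
model_activations = rng.standard_normal((n_stimuli, n_features))
neural_responses = rng.standard_normal((n_stimuli, n_neurons))

X_train, X_test, y_train, y_test = train_test_split(
    model_activations, neural_responses, test_size=0.2, random_state=0
)

# Linear map from model features to each recorded neuron's responses.
regression = Ridge(alpha=1.0).fit(X_train, y_train)

# "Neural predictivity": performance on held-out stimuli, here per-neuron R^2 averaged over neurons.
predictions = regression.predict(X_test)
per_neuron_r2 = 1.0 - ((y_test - predictions) ** 2).sum(axis=0) / (
    (y_test - y_test.mean(axis=0)) ** 2
).sum(axis=0)
print(f"mean held-out R^2: {per_neuron_r2.mean():.3f}")
```

In the methodology the abstract questions, models would then be ranked by this score; the argument here is that such rankings can reflect the implicit biases of the (linear) regression itself rather than genuine brain-likeness.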
