

Poster in Workshop: UniReps: Unifying Representations in Neural Models

What Representational Similarity Measures Imply about Decodable Information

Sarah Harvey · David Lipshutz · Alex Williams

Keywords: [ shape metrics ] [ representational similarity measures ] [ decodable information ] [ dissimilarity metrics ] [ representational geometry ] [ linear regression ]

Presentation: UniReps: Unifying Representations in Neural Models
Sat 14 Dec 8:15 a.m. PST — 5:30 p.m. PST

Abstract:

Neural responses encode information that is useful for a variety of downstream tasks. A common approach to understanding these systems is to build regression models or “decoders” that reconstruct features of the stimulus from neural responses. Here, we investigate how to leverage this perspective to quantify the similarity of different neural systems. This is distinct from typical motivations behind neural network similarity measures like centered kernel alignment (CKA), canonical correlation analysis (CCA), and Procrustes shape distance, which highlight geometric intuition and invariances to orthogonal or affine transformations. We show that CKA, CCA, and other measures can be equivalently motivated from similarity in decoding patterns. Specifically, these measures quantify the average alignment between optimal linear readouts across a distribution of decoding tasks. We also show that the Procrustes shape distance upper bounds the distance between optimal linear readouts, and that the converse holds for representations with low participation ratio. Overall, our work demonstrates a tight link between the geometry of neural representations and the ability to linearly decode information. This perspective suggests new ways of measuring similarity between neural systems and also provides novel, unifying interpretations of existing measures.
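
To make the quantities in the abstract concrete, below is a minimal sketch (not the authors' code) of how the measures it discusses could be computed with NumPy: linear CKA, the Procrustes shape distance, the alignment between optimal linear readouts of a decoding target, and the participation ratio. All function names, the ridge regularizer, and the synthetic data are illustrative assumptions; rows are stimuli and columns are neurons/units.

```python
import numpy as np

def center(X):
    """Subtract column means so each unit has zero mean across stimuli."""
    return X - X.mean(axis=0, keepdims=True)

def linear_cka(X, Y):
    """Linear centered kernel alignment between two response matrices."""
    X, Y = center(X), center(Y)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

def procrustes_distance(X, Y):
    """Procrustes shape distance: min over orthogonal Q of ||X - Y Q||_F,
    after centering and normalizing each matrix to unit Frobenius norm."""
    X, Y = center(X), center(Y)
    X = X / np.linalg.norm(X, "fro")
    Y = Y / np.linalg.norm(Y, "fro")
    # The trace term is maximized by the nuclear norm of Y^T X.
    nuc = np.linalg.norm(Y.T @ X, "nuc")
    return np.sqrt(max(2.0 - 2.0 * nuc, 0.0))

def readout_alignment(X, Y, z, ridge=1e-6):
    """Cosine similarity, in prediction space, between the optimal
    (ridge-regularized) linear readouts of a target z from X and from Y."""
    X, Y, z = center(X), center(Y), z - z.mean()
    wx = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ z)
    wy = np.linalg.solve(Y.T @ Y + ridge * np.eye(Y.shape[1]), Y.T @ z)
    px, py = X @ wx, Y @ wy
    return (px @ py) / (np.linalg.norm(px) * np.linalg.norm(py))

def participation_ratio(X):
    """(sum of eigenvalues)^2 / (sum of squared eigenvalues) of the
    covariance; low values mean variance concentrates in few dimensions."""
    lam = np.linalg.eigvalsh(np.cov(center(X).T))
    return lam.sum() ** 2 / (lam ** 2).sum()

# Illustrative synthetic data: system 2 is a noisy linear mixing of system 1.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))           # 200 stimuli, 50 units
Y = X @ rng.standard_normal((50, 50)) * 0.5
Y += 0.1 * rng.standard_normal(Y.shape)

# Average readout alignment over a distribution of random decoding tasks,
# echoing the abstract's "average alignment between optimal linear readouts."
targets = rng.standard_normal((200, 20))
mean_align = np.mean([readout_alignment(X, Y, targets[:, k]) for k in range(20)])

print("linear CKA:             ", linear_cka(X, Y))
print("Procrustes distance:    ", procrustes_distance(X, Y))
print("mean readout alignment: ", mean_align)
print("participation ratio (X):", participation_ratio(X))
```

Under this sketch's assumptions, the Procrustes distance and the readout-alignment average offer two views of the same comparison: the former bounds how far optimal readouts can disagree, which is the relationship the paper makes precise.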
