

Oral in Workshop: UniReps: Unifying Representations in Neural Models

Evidence from fMRI Supports a Two-Phase Abstraction Process in Language Models

Richard Antonello · Emily Cheng

Keywords: [ brain-LM similarity ] [ encoding models ] [ geometry ]

presentation: UniReps: Unifying Representations in Neural Models
Sat 14 Dec 8:15 a.m. PST — 5:30 p.m. PST

Abstract:

Research has repeatedly demonstrated that intermediate hidden states extracted from large language models can predict measured brain responses to natural language stimuli. Yet very little is known about the representational properties that enable this high prediction performance. Why are the intermediate layers, rather than the output layers, best suited to this unique and highly general transfer task? In this work, we show that evidence from language encoding models in fMRI supports the existence of a two-phase abstraction process within LLMs. Using geometric methods, we show that this abstraction process arises naturally over the course of training a language model, and that the first "composition" phase of the process is compressed into fewer layers as training continues. Finally, we demonstrate a strong correspondence between layerwise encoding performance and the intrinsic dimensionality of LLM representations, and give initial evidence that this correspondence derives primarily from the inherent compositionality of LLMs rather than from their next-word prediction objective.
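The "language encoding model" setup the abstract refers to is commonly implemented as a regularized linear regression from a layer's hidden states to voxelwise fMRI responses, scored by held-out prediction correlation per layer. The sketch below illustrates that pipeline on synthetic data; it is an assumption-laden illustration, not the authors' code, and the data, dimensions, and ridge penalty are placeholders.

```python
# Hedged sketch of a layerwise encoding model: ridge regression from
# (synthetic) hidden-state features to (synthetic) voxel responses,
# scored by mean held-out Pearson correlation across voxels.
import numpy as np

rng = np.random.default_rng(0)

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge weights: (X^T X + alpha I)^{-1} X^T Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

def encoding_score(X_tr, Y_tr, X_te, Y_te, alpha=1.0):
    """Mean per-voxel Pearson correlation on held-out data."""
    W = ridge_fit(X_tr, Y_tr, alpha)
    pred = X_te @ W
    pc = pred - pred.mean(0)
    yc = Y_te - Y_te.mean(0)
    r = (pc * yc).sum(0) / (
        np.linalg.norm(pc, axis=0) * np.linalg.norm(yc, axis=0) + 1e-8
    )
    return r.mean()

# Synthetic example: voxels are driven by one layer's features and
# unrelated to another's, so the informative layer scores higher.
n_tr, n_te, d, v = 200, 100, 16, 50
layer1_tr = rng.standard_normal((n_tr, d))
layer1_te = rng.standard_normal((n_te, d))
W_true = rng.standard_normal((d, v))
Y_tr = layer1_tr @ W_true + 0.1 * rng.standard_normal((n_tr, v))
Y_te = layer1_te @ W_true + 0.1 * rng.standard_normal((n_te, v))
layer0_tr = rng.standard_normal((n_tr, d))  # unrelated features
layer0_te = rng.standard_normal((n_te, d))

score_informative = encoding_score(layer1_tr, Y_tr, layer1_te, Y_te)
score_uninformative = encoding_score(layer0_tr, Y_tr, layer0_te, Y_te)
```

Comparing such scores layer by layer is what yields the characteristic mid-layer peak the abstract discusses; the intrinsic-dimensionality comparison would be a separate per-layer geometric estimate.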
