Poster in Workshop: Symmetry and Geometry in Neural Representations

Structured In-Context Task Representations

Core Francisco Park · Andrew Lee · Ekdeep S Lubana · Kento Nishi · Maya Okawa · Hidenori Tanaka

Keywords: [ Representation Learning ] [ Representation Geometry ] [ Task Representation ] [ In-Context Learning ] [ Large Language Models ]


Abstract:

Representation learning has been central to deep learning’s evolution. While interpretable structures have been observed in the representations of pre-trained models, an important question arises: Do networks develop such interpretable structures during in-context learning? Using synthetic sequence data derived from underlying geometrically structured graphs (e.g., grids, rings), we provide affirmative evidence that language models develop internal representations mirroring these geometric structures during in-context learning. Furthermore, we demonstrate how in-context examples can override semantic priors by constructing representations along dimensions other than those used by the prior. Overall, our study demonstrates that models can form meaningful representations solely from in-context exemplars.
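To make the data setup concrete, below is a minimal sketch of how in-context sequences might be generated from a geometrically structured graph such as a ring. The graph size, node-to-token mapping, walk length, and the suggested PCA probe are illustrative assumptions, not the authors' exact procedure.

```python
# Hypothetical sketch: in-context sequences as random walks on a ring graph.
# Graph shape, vocabulary, and walk length are assumptions for illustration only.
import random


def ring_graph(n):
    """Adjacency list of an n-node ring: node i connects to (i - 1) % n and (i + 1) % n."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}


def random_walk_sequence(adj, length, rng):
    """Sample a random walk over the graph, returning the visited node ids."""
    node = rng.choice(list(adj))
    walk = [node]
    for _ in range(length - 1):
        node = rng.choice(adj[node])
        walk.append(node)
    return walk


def to_prompt(walk, vocab):
    """Map node ids to arbitrary word tokens so structure must be inferred in context."""
    return " ".join(vocab[i] for i in walk)


if __name__ == "__main__":
    rng = random.Random(0)
    adj = ring_graph(8)
    # Arbitrary labels: semantically unrelated words stand in for graph nodes.
    vocab = ["apple", "bird", "car", "dog", "echo", "fern", "gold", "hat"]
    prompt = to_prompt(random_walk_sequence(adj, 64, rng), vocab)
    print(prompt)
    # One would then feed such prompts to a language model and inspect the hidden
    # states of each node token (e.g., via PCA) to test whether the ring geometry
    # is recovered in the model's in-context representations.
```

Such prompts expose only the graph's transition structure, so any geometric organization found in the hidden states must have been constructed in context rather than inherited from the tokens' semantics.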
