

Poster in Workshop: Symmetry and Geometry in Neural Representations

Constrained Belief Updating and Geometric Structures in Transformer Representations

Mateusz Piotrowski · Paul Riechers · Daniel Filan · Adam Shai

Keywords: [ belief state geometry ] [ mechanistic interpretability ] [ computational mechanics ]


Abstract:

How do transformers trained on next-token prediction represent their inputs? Our analysis reveals that in simple settings, transformers form intermediate representations with fractal structures distinct from, yet closely related to, the geometry of belief states of an optimal predictor. We identify the algorithmic process by which these representations form and connect this mechanism to constrained belief updating equations, offering insight into the geometric meaning of these fractals. These findings bridge the gap between the model-agnostic theory of belief state geometry and the specific architectural constraints of transformers.
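The "belief states of an optimal predictor" referenced above come from computational mechanics: an optimal predictor of a hidden Markov process maintains a posterior over hidden states and updates it by Bayes' rule after each observed token. The following is a minimal sketch of that update, using a hypothetical 2-state, 2-token HMM of my own construction (the transition matrices here are illustrative assumptions, not from the paper):

```python
import numpy as np

# Token-labeled transition matrices for a hypothetical 2-state HMM:
# T[x][i, j] = P(next state j, emit token x | current state i).
# These numbers are an illustrative assumption; T[0] + T[1] is row-stochastic.
T = {
    0: np.array([[0.6, 0.1],
                 [0.2, 0.2]]),
    1: np.array([[0.1, 0.2],
                 [0.1, 0.5]]),
}

def update_belief(b, x):
    """Bayesian belief update after observing token x: b' ∝ b @ T[x]."""
    b_new = b @ T[x]
    return b_new / b_new.sum()

# Initial belief: the stationary distribution of the marginal chain M = sum_x T[x].
M = T[0] + T[1]
evals, evecs = np.linalg.eig(M.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

# Each observed token sequence maps the initial belief to a point on the
# probability simplex; the set of reachable points is the belief state geometry.
b = pi.copy()
for x in [0, 1, 1, 0]:   # an arbitrary example token sequence
    b = update_belief(b, x)

print(b)                  # posterior over hidden states after the sequence
```

Enumerating these updates over all short token sequences traces out the belief state geometry on the simplex; the abstract's finding is that transformer residual-stream representations exhibit related fractal structure.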
