

Poster

Hypothesis Testing the Circuit Hypothesis in LLMs

Claudia Shi · Nicolas Beltran Velez · Achille Nazaret · Carolina Zheng · Adrià Garriga-Alonso · Andrew Jesson · Maggie Makar · David Blei

[ Project Page ]
Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Large language models (LLMs) demonstrate surprising capabilities, but we do not understand how they are implemented. One hypothesis suggests that these capabilities are primarily executed by small subnetworks within the LLM, known as circuits. But how can we evaluate this hypothesis? In this paper, we formalize a set of criteria that a circuit is hypothesized to meet and develop a suite of hypothesis tests to evaluate how well circuits satisfy them. The criteria focus on the extent to which the LLM's behavior is preserved, the degree of localization of this behavior, and whether the circuit is minimal. We apply these tests to six circuits described in the research literature. We find that synthetic circuits -- circuits that are hard-coded in the model -- align with the idealized properties. Circuits discovered in Transformer models satisfy the criteria to varying degrees.
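As an illustration of the kind of check the abstract describes (not the paper's actual test suite), the sketch below shows one way a behavior-preservation criterion could be cast as a hypothesis test: compare a per-prompt task metric from the full model against the same metric from the model with everything outside the candidate circuit ablated, and ask whether the mean absolute gap falls below a tolerance. The function name, the metric arrays, and the tolerance epsilon are all hypothetical placeholders.

```python
# Minimal sketch, assuming we already have per-prompt scores (e.g., logit
# differences) from the full model and from the circuit-only (ablated) model.
# This is NOT the authors' procedure; it is a generic bootstrap equivalence-style
# test of H0: mean |gap| >= epsilon  vs.  H1: mean |gap| < epsilon.
import numpy as np

def faithfulness_test(full_scores, circuit_scores, epsilon=0.1,
                      n_boot=10_000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    gaps = np.abs(np.asarray(full_scores) - np.asarray(circuit_scores))
    n = len(gaps)
    # Bootstrap distribution of the mean absolute gap.
    boot_means = np.array([
        rng.choice(gaps, size=n, replace=True).mean() for _ in range(n_boot)
    ])
    # Upper (1 - alpha) confidence bound on the mean absolute gap.
    upper = np.quantile(boot_means, 1 - alpha)
    return {"mean_gap": float(gaps.mean()),
            "upper_bound": float(upper),
            "circuit_preserves_behavior": bool(upper < epsilon)}

# Illustrative usage with synthetic numbers only.
rng = np.random.default_rng(1)
full = rng.normal(2.0, 0.5, size=200)
circ = full + rng.normal(0.0, 0.05, size=200)  # circuit closely tracks the model
print(faithfulness_test(full, circ, epsilon=0.1))
```

Rejecting H0 here is evidence that the circuit's behavior stays within the chosen tolerance of the full model; analogous tests could target the localization and minimality criteria.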
