Poster in Workshop: Attributing Model Behavior at Scale (ATTRIB)
The Mutual Relationship between Corpus Frequency and Linear Representations in Language Models
Jack Merullo · Sarah Wiegreffe · Yanai Elazar
Pretraining data has a direct impact on the behaviors and quality of language models (LMs), but we only understand the most basic principles of this relationship. While most work focuses on the link between pretraining data and downstream task behavior, we look at its effect on LM representations. Previous work has discovered highly interpretable linear representations of concepts in language models that allow for more controllable generation, but what leads to the formation of these representations during pretraining is largely unknown. We study the connection between differences in pretraining data frequency and differences in trained models' linear representations of factual recall relations. We find evidence that the two are directly linked: the formation of linear representations is strongly tied to pretraining term frequencies. First, we establish that the presence of linear representations for facts in subject-relation-object form is highly correlated with both subject-object co-occurrence frequency and in-context learning accuracy. This holds across all phases of pretraining, i.e., it is not affected by the model's underlying capability. In OLMo 7B and GPT-J (6B), we find that a linear representation reliably forms once the subjects and objects within a relation co-occur at least 1-2k times on average; in the OLMo 1B model, this happens only after 4.4k co-occurrences. Linear representations thus appear to form as a result of consistent repeated exposure, not as an emergent effect of lengthy pretraining alone. Finally, we train a regression model on measurements of linear representation robustness that noisily predicts how often a term was seen in pretraining; this regression transfers to GPT-J without additional training, providing a new unsupervised method for reasoning about the possible data sources of closed-source models. We conclude that the presence or absence of linear representations carries a weak but significant signal that reflects an imprint of the pretraining corpus across LMs.
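As a rough illustration of the final step, the sketch below fits a simple regression from per-relation linear-representation robustness scores to log co-occurrence frequency, the kind of predictor that could then be applied to a model whose pretraining data is unknown. This is a hedged sketch, not the authors' implementation: the feature names (faithfulness, causality), the synthetic arrays, and the Ridge model are placeholder assumptions; in practice the features would come from probing an LM and the targets from corpus counts.

```python
# Sketch: predict pretraining co-occurrence frequency from linear-representation
# robustness scores. All data below is synthetic and for illustration only.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

n_points = 200  # hypothetical number of (relation, subject-object) measurement points

# Placeholder "robustness" features per point:
#   col 0: faithfulness  (how often a fitted linear map recovers the correct object)
#   col 1: causality     (how often intervening along the map changes the prediction)
X = rng.uniform(0.0, 1.0, size=(n_points, 2))

# Placeholder target: log10 of average subject-object co-occurrence count.
# Synthetic link for illustration: higher robustness ~ higher frequency, plus noise.
y = 2.0 + 2.5 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(0.0, 0.3, size=n_points)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

reg = Ridge(alpha=1.0).fit(X_tr, y_tr)
print("held-out R^2:", r2_score(y_te, reg.predict(X_te)))

# Applied to another model (e.g., one with undisclosed pretraining data), the same
# robustness features would yield noisy estimates of log10 co-occurrence counts:
print("predicted log10 co-occurrence:", reg.predict(X_te[:3]))
```

The point of the sketch is only the pipeline shape: robustness measurements in, noisy frequency estimates out, with transfer to a second model tested by reusing the fitted regressor without retraining.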