

Poster in Workshop: Statistical Frontiers in LLMs and Foundation Models

Estimating and Correcting for Misclassification Error in Empirical Textual Research

Jonathan Choi

Keywords: [ large language models ] [ M&A agreements ] [ attenuation bias ] [ misclassification error ] [ statistical correction ] [ logistic regression ] [ econometrics ] [ textual analysis ] [ bootstrap ] [ validation statistics ] [ legal scholarship ] [ contract evolution ] [ measurement error ] [ Supreme Court citations ] [ amplification bias ] [ empirical research ]

[ Project Page ]
Sat 14 Dec 3:45 p.m. PST — 4:30 p.m. PST

Abstract:

We present a framework for quantifying the impact of, and correcting for, misclassification error in empirical research involving textual data. Misclassification error commonly arises when, for example, large language models (LLMs) or human research assistants are tasked with classifying features in text. For statistics calculated from classification estimates, misclassification error may introduce attenuation bias from noise, directional bias from an imbalance of false positives and false negatives, or both. We present strategies for statistically quantifying misclassification error and for correcting estimates based on mismeasured data. We demonstrate the effectiveness of these techniques with Monte Carlo simulations and two worked examples involving real data from LLM classifications. The examples demonstrate the importance of correcting for measurement error, particularly when using LLMs with imbalanced confusion matrices.
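
To make the attenuation-and-correction argument concrete, here is a minimal Monte Carlo sketch in Python. It is not from the paper: it assumes nondifferential misclassification of a binary regressor, uses illustrative sensitivity and specificity values (SENS and SPEC stand in for rates one would estimate from a hand-labeled validation sample), and applies a Rogan-Gladen-style prevalence correction plus the implied attenuation factor to recover the true regression slope.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative error rates; in practice these would be estimated from a
# hand-labeled validation sample. All names and values here are assumptions.
SENS, SPEC = 0.90, 0.80    # P(classified 1 | true 1), P(classified 0 | true 0)
P_TRUE, BETA = 0.30, 2.0   # true prevalence of the feature and true slope

n = 200_000
x = rng.binomial(1, P_TRUE, n)            # true binary feature
y = BETA * x + rng.normal(0, 1, n)        # outcome with slope BETA

# Simulate a noisy classifier: nondifferential misclassification of x.
x_hat = np.where(x == 1,
                 rng.binomial(1, SENS, n),       # keep a 1 with prob. SENS
                 rng.binomial(1, 1 - SPEC, n))   # flip a 0 with prob. 1 - SPEC

# Naive OLS slope on the mismeasured regressor is attenuated toward zero.
naive = np.cov(x_hat, y, bias=True)[0, 1] / np.var(x_hat)

# Rogan-Gladen correction of the prevalence, then the implied attenuation
# factor lambda = Cov(x, x_hat) / Var(x_hat) under nondifferential error.
q = x_hat.mean()                          # observed prevalence
p = (q + SPEC - 1) / (SENS + SPEC - 1)    # corrected prevalence
lam = p * (1 - p) * (SENS + SPEC - 1) / (q * (1 - q))
corrected = naive / lam

print(f"naive slope:     {naive:.3f}")      # roughly 1.2, well below BETA = 2.0
print(f"corrected slope: {corrected:.3f}")  # close to BETA = 2.0

In practice the sensitivity and specificity are themselves estimated from a finite validation sample; resampling both the main data and the validation labels (e.g., with a bootstrap) is one way to propagate that estimation uncertainty into the corrected statistic.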
