

Poster in Workshop: Statistical Frontiers in LLMs and Foundation Models

An empirical study of in-context uncertainty quantification with conformal prediction

Zhe Huang · Simone Rossi · Rui Yuan · Thomas Hannagan

Keywords: [ In-Context Learning ] [ Conformal Prediction ] [ Uncertainty Quantification ]

Sat 14 Dec 3:45 p.m. PST — 4:30 p.m. PST

Abstract:

Transformers are versatile, parallelizable models that also exhibit the remarkable capability of In-Context Learning (ICL). ICL allows models to adapt to new tasks by incorporating input-output examples in the prompt, without modifying the model parameters. However, reliably quantifying the uncertainty of ICL predictions remains a challenge: traditional methods such as ensembling and Bayesian inference do not scale well and require specific assumptions (e.g., the choice of prior). Conformal Prediction (CP) is a well-known distribution-free method, but it remains computationally demanding, as several models must be fitted to build confidence intervals. This paper combines ICL with CP to obtain a scalable, distribution-free uncertainty quantification method. On a linear regression task, our approach achieves robust coverage probabilities and significantly reduces computation time compared to traditional methods. Furthermore, we experimentally identify scaling laws for the quality of conformal prediction applied to in-context linear regression, showing how best to allocate compute as a function of model parameters and training iterations. Our work represents a step towards practical uncertainty quantification for Large Language Models.
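To give a sense of how split conformal prediction can wrap an in-context learner, here is a minimal Python sketch. It is not the authors' exact procedure: the `in_context_predict` ridge fit stands in for a single forward pass of a pretrained transformer conditioned on the context, and the task dimensions, noise level, and miscoverage rate are illustrative assumptions. The conformal step itself is the standard split procedure: compute absolute residuals on a calibration set, take a finite-sample-corrected quantile, and form symmetric intervals around the point predictions.

```python
import numpy as np

def in_context_predict(context_x, context_y, query_x):
    """Stand-in for an ICL transformer: a ridge fit on the context examples.

    In the in-context setting this would instead be one forward pass of a
    pretrained transformer given (x_1, y_1, ..., x_n, y_n, x_query).
    """
    d = context_x.shape[1]
    A = context_x.T @ context_x + 1e-3 * np.eye(d)
    w = np.linalg.solve(A, context_x.T @ context_y)
    return query_x @ w

def split_conformal_intervals(context_x, context_y,
                              calib_x, calib_y,
                              test_x, alpha=0.1):
    """Split conformal prediction on top of in-context point predictions."""
    # Nonconformity scores: absolute residuals on the calibration split.
    calib_pred = in_context_predict(context_x, context_y, calib_x)
    scores = np.abs(calib_y - calib_pred)

    # Finite-sample-corrected quantile of the calibration scores.
    n = len(scores)
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q_hat = np.quantile(scores, q_level, method="higher")

    # Symmetric intervals around the in-context point predictions.
    test_pred = in_context_predict(context_x, context_y, test_x)
    return test_pred - q_hat, test_pred + q_hat

# Toy linear regression task (illustrative sizes, not the paper's setup).
rng = np.random.default_rng(0)
d, n_ctx, n_cal, n_test = 5, 40, 40, 200
w_star = rng.normal(size=d)

def sample(n):
    x = rng.normal(size=(n, d))
    return x, x @ w_star + 0.5 * rng.normal(size=n)

ctx_x, ctx_y = sample(n_ctx)
cal_x, cal_y = sample(n_cal)
tst_x, tst_y = sample(n_test)

lo, hi = split_conformal_intervals(ctx_x, ctx_y, cal_x, cal_y, tst_x, alpha=0.1)
coverage = np.mean((tst_y >= lo) & (tst_y <= hi))
print(f"empirical coverage: {coverage:.3f} (target >= 0.9)")
```

Because the intervals come from a single set of forward passes plus a quantile computation, no ensemble or posterior sampling is needed, which is the source of the computational savings the abstract describes.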
