

Poster
in
Workshop: Statistical Frontiers in LLMs and Foundation Models

Learning to Localize: Practical Algorithms for Online Weighted Conformal Prediction

Tiffany Ding · Anastasios Angelopoulos · Michael Jordan · Ryan Tibshirani

Keywords: [ online learning ] [ conformal prediction ] [ weighting ] [ uncertainty quantification ]

Sat 14 Dec, 12:00 p.m. – 12:45 p.m. PST

Abstract:

We propose a method for performing uncertainty quantification with guarantees in online settings with arbitrary distribution shifts by leveraging the framework of weighted conformal prediction. Previous work on conformal prediction in the online adversarial setting focuses on achieving marginal coverage under the assumption of full feedback (i.e., labels are observed at every time step); we go beyond this goal and additionally aim for set sizes that are adaptive to test-point difficulty and for coverage under intermittent feedback (i.e., labels for some time steps are never observed). The key idea of our method is to define a localizer that assigns calibration examples weights that decrease with their distance to the test point (measured in an embedding space), then adaptively adjust the localizer bandwidth as we observe feedback. This yields an assumption-free guarantee of marginal coverage while also displaying good set-size adaptivity, even under intermittent feedback, when distribution shifts are less adversarial. We validate our method empirically on several synthetic and real datasets.
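To make the key idea concrete, here is a minimal sketch of localized weighted conformal prediction: a kernel localizer downweights calibration points far from the test point in embedding space, and a weighted quantile of the calibration scores gives the prediction-set threshold. The Gaussian kernel, the fixed bandwidth, and the function names are illustrative assumptions, not the paper's exact construction (which adapts the bandwidth online).

```python
import numpy as np

def localizer_weights(cal_embeds, test_embed, bandwidth):
    """Kernel weights that decay with distance to the test point.
    Gaussian kernel is an illustrative choice; the paper's localizer
    and its online bandwidth adaptation may differ."""
    dists = np.linalg.norm(cal_embeds - test_embed, axis=1)
    return np.exp(-(dists / bandwidth) ** 2)

def weighted_conformal_quantile(cal_scores, weights, alpha):
    """Level-(1 - alpha) quantile of the weighted empirical distribution
    of calibration scores, with the test point carrying unit weight on
    a score of +inf (the standard conservative convention)."""
    order = np.argsort(cal_scores)
    scores, w = cal_scores[order], weights[order]
    total = w.sum() + 1.0  # +1 accounts for the test point's own weight
    cum = np.cumsum(w) / total
    idx = np.searchsorted(cum, 1 - alpha)
    # If the mass never reaches 1 - alpha, the threshold is infinite
    # and the prediction set is the whole label space.
    return scores[idx] if idx < len(scores) else np.inf
```

With uniform weights this reduces to standard split conformal prediction; the prediction set for a test input `x` is then every label `y` whose nonconformity score is at most the returned threshold.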
