Poster in Workshop: Statistical Frontiers in LLMs and Foundation Models

Uncertainty Quantification for Inverse Problems with Generative Priors under Distribution Shift

Sara Fridovich-Keil

Keywords: inverse problems, distribution shift, data-driven priors, uncertainty quantification

Sat 14 Dec 3:45 p.m. PST — 4:30 p.m. PST

Abstract:

Generative models have shown strong potential as data-driven priors for solving inverse problems, such as reconstructing medical images from undersampled measurements. Although these data-driven priors can improve reconstruction quality while reducing the number of required measurements, they also introduce the possibility of hallucination when the image to be reconstructed falls outside the distribution of images and measurements used to train the prior. Existing approaches to uncertainty quantification in this setting either require an in-distribution calibration dataset, which may not be readily available; provide heuristic rather than statistical uncertainty estimates; or quantify uncertainty arising from model overparameterization or limited measurements rather than from distribution shift. In this extended abstract we highlight the need for instance-level uncertainty quantification in the presence of distribution shift and propose strategies to provide it.
