Poster in Workshop: Statistical Frontiers in LLMs and Foundation Models
Diffusion-Powered Image Super-Resolution That You Can Actually Trust
Daniel Csillag · Eduardo Adame · Guilherme Tegoni Goedert
Keywords: [ conformal prediction ] [ foundation image models ] [ uncertainty quantification ] [ diffusion models ] [ generative models ] [ image super-resolution ]
The increasing use of generative ML foundation models for image super-resolution calls for robust and interpretable uncertainty quantification methods. We address this need by presenting a novel approach based on Conformal Prediction to create a "confidence mask" capable of reliably and intuitively communicating where the generated image can be trusted. Our method allows for ample customization via the choice of local image similarity metric. Furthermore, it is adaptable to any black-box diffusion model, including models locked behind an opaque API, and needs only easily attainable unlabeled data for calibration. We prove strong theoretical guarantees for our method spanning fidelity error control, reconstruction quality, and robustness in the face of data leakage. Finally, we empirically validate these guarantees and demonstrate our method's strong performance.
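To make the abstract's pipeline concrete, below is a minimal sketch of how a conformal confidence mask of this kind could be calibrated and applied. It is an illustration under stated assumptions, not the paper's exact recipe: `sr_model` stands for an arbitrary black-box (possibly API-only) stochastic super-resolution call, `downsample` builds calibration pairs from plain high-resolution images, the local similarity metric is assumed to be a window-averaged absolute error, the per-pixel score is assumed to be the spread across several stochastic generations, and the threshold is picked by a monotone risk-control style sweep so that trusted pixels violate a fidelity tolerance at rate at most alpha on the calibration set. Images are assumed to be 2D grayscale NumPy arrays in [0, 1].

```python
import numpy as np
from scipy.ndimage import uniform_filter  # local window averaging


def local_error(generated, reference, window=7):
    """Per-pixel local dissimilarity: window-averaged absolute error.
    Stands in for whichever local image similarity metric is chosen."""
    return uniform_filter(np.abs(generated - reference), size=window)


def pixel_uncertainty(samples, window=7):
    """Heuristic per-pixel uncertainty: window-averaged spread across
    several stochastic generations from the black-box model."""
    return uniform_filter(np.std(np.stack(samples), axis=0), size=window)


def calibrate_threshold(sr_model, hires_images, downsample,
                        tol=0.05, alpha=0.1, n_samples=8, window=7):
    """Find the largest uncertainty threshold lam such that, on calibration
    data, pixels flagged as trustworthy (uncertainty <= lam) exceed the
    fidelity tolerance `tol` at rate at most alpha."""
    unc, err = [], []
    for hi in hires_images:
        lo = downsample(hi)                                # unlabeled high-res image -> input/reference pair
        samples = [sr_model(lo) for _ in range(n_samples)]  # opaque API calls
        unc.append(pixel_uncertainty(samples, window).ravel())
        err.append(local_error(np.mean(samples, axis=0), hi, window).ravel())
    unc, err = np.concatenate(unc), np.concatenate(err)

    lambdas = np.quantile(unc, np.linspace(0.0, 1.0, 200))  # candidate thresholds
    best = lambdas[0]
    for lam in lambdas:
        flagged = unc <= lam
        if np.mean(err[flagged] > tol) <= alpha:
            best = lam          # keep enlarging the trusted region
        else:
            break               # stop at the first violation (monotone sweep)
    return best


def confidence_mask(sr_model, lowres, lam, n_samples=8, window=7):
    """At test time, trust exactly the pixels whose uncertainty stays below
    the calibrated threshold; return a reconstruction and the mask."""
    samples = [sr_model(lowres) for _ in range(n_samples)]
    recon = np.mean(samples, axis=0)
    mask = pixel_uncertainty(samples, window) <= lam
    return recon, mask
```

Because both calibration and deployment only query the super-resolution model through its forward sampling interface, a sketch of this shape would apply unchanged to any black-box diffusion model; swapping the similarity metric or the per-pixel score changes only the two helper functions at the top.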