

Poster
in
NeurIPS 2024 Workshop: Machine Learning and the Physical Sciences

Amortizing intractable inference in diffusion models for Bayesian inverse problems

Siddarth Venkatraman · Moksh Jain · Luca Scimeca · Minsu Kim · Marcin Sendera · Mohsin Hasan · Luke Rowe · Sarthak Mittal · Pablo Lemos · Emmanuel Bengio · Alexandre Adam · Jarrid Rector-Brooks · Yashar Hezaveh · Laurence Perreault-Levasseur · Yoshua Bengio · Glen Berseth · Nikolay Malkin


Abstract: Diffusion models have emerged as effective distribution estimators, but their use as priors in downstream tasks poses an intractable posterior inference problem. This paper studies amortized sampling of the posterior over data, $\mathbf{x}\sim p^{post}(\mathbf{x})\propto p(\mathbf{x})r(\mathbf{x})$, in a model that combines a diffusion generative prior $p(\mathbf{x})$ with a black-box constraint or likelihood function $r(\mathbf{x})$. Recent work introduced relative trajectory balance (RTB), an asymptotically correct, data-free learning objective for training a diffusion model to sample from this posterior, a problem that existing methods solve only approximately or in restricted cases. A particularly useful application of unbiased posterior inference is the Bayesian approach to scientific inverse problems, such as gravitational lensing, which are otherwise ill-posed. We apply RTB to such tasks and demonstrate its effectiveness on high-dimensional Bayesian inverse problems with image data, including classifier guidance, phase retrieval, and the astrophysics problem of gravitational lensing.
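For readers unfamiliar with RTB, here is a minimal sketch of the objective's form, reconstructed from the definitions in the abstract; the trajectory notation, the learned scalar $Z_\theta$, and the transition factorization below are illustrative assumptions rather than details stated on this page. Along a denoising trajectory $\tau = (\mathbf{x}_T, \ldots, \mathbf{x}_0)$, the posterior model $p^{post}_\theta$ and the constant $Z_\theta$ are trained to satisfy the balance condition

$$Z_\theta \prod_{t=1}^{T} p^{post}_\theta(\mathbf{x}_{t-1} \mid \mathbf{x}_t) = r(\mathbf{x}_0) \prod_{t=1}^{T} p(\mathbf{x}_{t-1} \mid \mathbf{x}_t),$$

typically by minimizing the squared log-ratio of the two sides,

$$\mathcal{L}_{RTB}(\tau; \theta) = \left( \log \frac{Z_\theta \prod_{t=1}^{T} p^{post}_\theta(\mathbf{x}_{t-1} \mid \mathbf{x}_t)}{r(\mathbf{x}_0) \prod_{t=1}^{T} p(\mathbf{x}_{t-1} \mid \mathbf{x}_t)} \right)^2.$$

If this condition holds on all trajectories, $Z_\theta$ recovers the normalizing constant of $p(\mathbf{x})r(\mathbf{x})$ and samples $\mathbf{x}_0$ drawn from $p^{post}_\theta$ follow the target posterior, which is why the objective is data-free and asymptotically correct.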
