

Poster

Unrolled denoising networks that provably learn to perform optimal Bayesian inference

Aayush Karan · Kulin Shah · Sitan Chen · Yonina Eldar

Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Much of Bayesian inference centers on the design of estimators for inverse problems that are optimal assuming the data comes from a certain prior. But where does this prior come from, and what do these optimality guarantees mean if we don't know the prior? In recent years, algorithm unrolling has emerged as deep learning's answer to this age-old question: design a neural network whose layers can in principle simulate iterations of these "optimal" inference algorithms, and train the network on data generated by the unknown prior. In practice this method, which is inherently prior-agnostic, performs at least as well as, and often better than, the best hand-crafted, prior-aware algorithms.

Despite these empirical successes, it has remained theoretically unclear why this method works well, or what estimators these networks converge to. In this work, we prove the first rigorous learning guarantees for neural networks based on unrolling approximate message passing (AMP). For compressed sensing, we prove that when trained on data drawn from a product prior, the layers of the network approximately converge to the same denoisers used in Bayes AMP. We also provide extensive numerical experiments demonstrating the advantages of our unrolled architecture: in addition to being able to obliviously adapt to general priors, it can handle more general (e.g., non-sub-Gaussian) measurements and exhibits non-asymptotic improvements over the MSE achieved by Bayes AMP.
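To make the unrolling idea concrete, the sketch below shows a plain AMP recursion for compressed sensing, where each iteration corresponds to one "layer". This is an illustrative assumption, not the paper's architecture: the fixed soft-thresholding denoiser stands in for the denoiser that an unrolled network would learn from data, and all function and variable names here are hypothetical.

```python
import numpy as np

def soft_threshold(v, tau):
    """Illustrative fixed denoiser; in an unrolled network each layer
    would instead apply a learned, layer-specific denoiser."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def amp_sketch(y, A, n_layers=20):
    """Minimal AMP recursion for y = A x + noise (sketch only).

    Each iteration denoises the effective observation A.T @ z + x,
    which behaves like the signal plus approximately Gaussian noise,
    and updates the residual with the Onsager correction term.
    """
    n, d = A.shape
    delta = n / d                        # measurement ratio
    x = np.zeros(d)
    z = y.copy()
    for _ in range(n_layers):
        tau = np.sqrt(np.mean(z ** 2))   # estimate of effective noise level
        pseudo_data = A.T @ z + x        # "signal + Gaussian noise" input to the denoiser
        x_new = soft_threshold(pseudo_data, tau)
        # Onsager correction: empirical average of the denoiser's derivative
        onsager = np.mean(np.abs(pseudo_data) > tau) / delta
        z = y - A @ x_new + onsager * z
        x = x_new
    return x
```

Unrolling replaces the hand-chosen denoiser with trainable modules, one per layer, and trains them end to end on samples from the unknown prior; the paper's result is that, for product priors, these learned denoisers approximately recover the Bayes AMP denoisers.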
