Workshop on Deep Learning and Inverse Problems
Reinhard Heckel, Paul Hand, Richard Baraniuk, Lenka Zdeborová, Soheil Feizi
2020-12-11T07:30:00-08:00 - 2020-12-11T16:00:00-08:00
Abstract: Learning-based methods, and in particular deep neural networks, have emerged as highly successful and universal tools for image and signal recovery and restoration. They achieve state-of-the-art results on tasks ranging from image denoising and compression to image reconstruction from few and noisy measurements. They are starting to be used in important imaging technologies, for example in GE's newest computed tomography scanners and in the newest generation of the iPhone.
The field has a range of theoretical and practical questions that remain unanswered. In particular, learning and neural network-based approaches often lack the guarantees of traditional physics-based methods. Further, while superior on average, learning-based methods can make drastic reconstruction errors, such as hallucinating a tumor in an MRI reconstruction or reconstructing a pixelated photo of Barack Obama as a white man.
This virtual workshop aims to bring together theoreticians and practitioners in order to chart recent advances and discuss new directions in deep neural network-based approaches for solving inverse problems in the imaging sciences and beyond. NeurIPS, with its visibility and attendance by experts in machine learning, offers an ideal forum for this exchange of ideas. We will use the virtual format to make this topic accessible to a broader audience than an in-person meeting could reach, as described below.
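For readers new to the area, the sketch below sets up the generic linear inverse problem referred to above (recovering a signal from few, noisy measurements) and solves it with Tikhonov regularization as a classical physics-based baseline; the dimensions, operator, and noise level are illustrative assumptions, not taken from any workshop paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 128, 48                                   # signal size, number of measurements (m < n)
x_true = rng.standard_normal(n)                  # unknown signal (stand-in for an image)
A = rng.standard_normal((m, n)) / np.sqrt(m)     # measurement operator
y = A @ x_true + 0.05 * rng.standard_normal(m)   # few, noisy measurements

# Classical physics-based baseline: Tikhonov regularization,
#   x_hat = argmin_x ||A x - y||^2 + lam * ||x||^2
lam = 0.1
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
print("relative reconstruction error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Learning-based methods replace the hand-crafted regularizer or the entire reconstruction map with a trained network, which is the setting the talks below address.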
Schedule
2020-12-11T07:30:00-08:00 - 2020-12-11T07:55:00-08:00
Newcomer presentation
Reinhard Heckel, Paul Hand
This session consists of a 15-minute talk and a 10-minute Q&A geared toward newcomers to the field, introducing them to the major questions and approaches related to deep learning and inverse problems.
2020-12-11T07:55:00-08:00 - 2020-12-11T08:00:00-08:00
Opening Remarks
Reinhard Heckel, Paul Hand, Soheil Feizi, Lenka Zdeborová, Richard Baraniuk
2020-12-11T08:00:00-08:00 - 2020-12-11T08:30:00-08:00
Victor Lempitsky - Generative Models for Landscapes and Avatars
Victor Lempitsky
2020-12-11T08:30:00-08:00 - 2020-12-11T09:00:00-08:00
Thomas Pock - Variational Networks
Thomas Pock
2020-12-11T09:00:00-08:00 - 2020-12-11T09:15:00-08:00
Risk Quantification in Deep MRI Reconstruction
Vineet Edupuganti
Reliable medical image recovery is crucial for accurate patient diagnoses, but little prior work has centered on quantifying uncertainty when using non-transparent deep learning approaches to reconstruct high-quality images from limited measured data. In this study, we develop methods to address these concerns, utilizing a variational autoencoder (VAE) as a probabilistic recovery algorithm for pediatric knee MR imaging. Through our use of Stein's Unbiased Risk Estimator (SURE), which examines the end-to-end network Jacobian, we demonstrate a new and rigorous metric for assessing risk in medical image recovery that applies universally across model architectures.
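To make the risk estimate concrete, here is a minimal Monte-Carlo SURE sketch for a generic recovery map under i.i.d. Gaussian noise of known variance; the soft-thresholding `recover` function is a toy stand-in for the reconstruction network, and the probe-based divergence term approximates the trace of the end-to-end Jacobian that the abstract refers to.

```python
import numpy as np

def recover(y):
    """Toy stand-in for a reconstruction network: soft-thresholding denoiser."""
    return np.sign(y) * np.maximum(np.abs(y) - 0.1, 0.0)

def monte_carlo_sure(y, sigma, eps=1e-3, seed=0):
    """Estimate the per-pixel risk of recover(y) under y = x + N(0, sigma^2 I)."""
    rng = np.random.default_rng(seed)
    n = y.size
    fy = recover(y)
    b = rng.standard_normal(n)                      # random probe vector
    div = b @ (recover(y + eps * b) - fy) / eps     # ~ trace of the end-to-end Jacobian
    return (np.sum((fy - y) ** 2) - n * sigma**2 + 2 * sigma**2 * div) / n

sigma = 0.1
y = sigma * np.random.default_rng(1).standard_normal(256)   # noisy observation of a zero image
print("SURE risk estimate:", monte_carlo_sure(y, sigma))
```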
2020-12-11T09:15:00-08:00 - 2020-12-11T09:30:00-08:00
GAN2GAN: Generative Noise Learning for Blind Denoising with Single Noisy Images
Sungmin Cha
We tackle a challenging blind image denoising problem, in which only single distinct noisy images are available for training a denoiser, and no information about the noise is known, except that it is zero-mean, additive, and independent of the clean image. In such a setting, which often occurs in practice, it is not possible to train a denoiser with standard discriminative training or with the recently developed Noise2Noise (N2N) training; the former requires the underlying clean image for each noisy image, and the latter requires two independently realized noisy images of the same clean image. To that end, we propose the GAN2GAN (Generated-Artificial-Noise to Generated-Artificial-Noise) method, which first learns a generative model that can 1) simulate the noise in the given noisy images and 2) generate rough, noisy estimates of the clean images, and then 3) iteratively trains a denoiser on synthesized noisy image pairs (as in N2N) obtained from the generative model. In our experiments, we show that the denoiser trained with GAN2GAN achieves impressive denoising performance on both synthetic and real-world datasets in the blind denoising setting.
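The Noise2Noise-style retraining in step 3) can be sketched as below; the tiny CNN, the Gaussian noise model, and the randomly generated "rough clean estimates" are illustrative placeholders, not the paper's learned generative model or architecture.

```python
import torch
import torch.nn as nn

# Tiny CNN denoiser; a placeholder, not the paper's architecture.
denoiser = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

rough_estimates = torch.rand(8, 1, 32, 32)   # stand-in for step 2)'s rough clean estimates
for step in range(100):
    # Step 3): synthesize two independent noisy realizations of the same images
    noisy_in  = rough_estimates + 0.1 * torch.randn_like(rough_estimates)
    noisy_tgt = rough_estimates + 0.1 * torch.randn_like(rough_estimates)
    loss = nn.functional.mse_loss(denoiser(noisy_in), noisy_tgt)   # Noise2Noise-style loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```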
2020-12-11T09:30:00-08:00 - 2020-12-11T10:00:00-08:00
Discussion
Visit the Gather.town to discuss with speakers and other attendees.
2020-12-11T10:00:00-08:00 - 2020-12-11T10:30:00-08:00
Rebecca Willett - Model Adaptation for Inverse Problems in Imaging
Rebecca Willett
2020-12-11T10:30:00-08:00 - 2020-12-11T11:00:00-08:00
Stefano Ermon - Generative Modeling via Denoising
Stefano Ermon
2020-12-11T11:00:00-08:00 - 2020-12-11T11:15:00-08:00
Compressed Sensing with Approximate Priors via Conditional Resampling
Ajil Jalal
We characterize the measurement complexity of compressed sensing of signals drawn from a known prior distribution, even when the support of the prior is the entire space (rather than, say, sparse vectors). We show that, for Gaussian measurements and any prior distribution on the signal, the conditional resampling estimator achieves near-optimal recovery guarantees. Moreover, this result is robust to model mismatch, as long as the distribution estimate (e.g., from an invertible generative model) is close to the true distribution in Wasserstein distance. We implement the conditional resampling estimator for deep generative priors using Langevin dynamics, and empirically find that it produces accurate estimates with more diversity than MAP estimation.
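A minimal sketch of posterior sampling with unadjusted Langevin dynamics for a linear measurement model is given below; the standard-Gaussian prior score is a placeholder for the score of a deep generative model, and the dimensions, noise level, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, sigma = 64, 24, 0.05
A = rng.standard_normal((m, n)) / np.sqrt(m)      # Gaussian measurement matrix
x_true = rng.standard_normal(n)
y = A @ x_true + sigma * rng.standard_normal(m)   # noisy compressed measurements

def prior_score(x):
    """Score of a standard Gaussian prior; placeholder for a deep generative prior."""
    return -x

x = rng.standard_normal(n)
eta = 1e-4                                        # Langevin step size
for _ in range(5000):
    grad_loglik = A.T @ (y - A @ x) / sigma**2    # gradient of log p(y | x)
    x = x + 0.5 * eta * (grad_loglik + prior_score(x)) \
          + np.sqrt(eta) * rng.standard_normal(n)
print("measurement residual:", np.linalg.norm(A @ x - y))
```

Running the chain longer (or restarting it) yields multiple posterior samples, which is what gives the conditional resampling estimator more diversity than a single MAP reconstruction.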
2020-12-11T11:15:00-08:00 - 2020-12-11T11:30:00-08:00
Chris Metzler - Approximate Message Passing (AMP) Algorithms for Computational Imaging
Christopher A Metzler
2020-12-11T11:30:00-08:00 - 2020-12-11T12:00:00-08:00
Discussion
Visit the Gather.town to discuss with speakers and other attendees.
2020-12-11T13:00:00-08:00 - 2020-12-11T14:00:00-08:00
Poster Session
Visit the Gather.town to see the posters.
2020-12-11T14:00:00-08:00 - 2020-12-11T14:30:00-08:00
Peyman Milanfar - Denoising as a Building Block: Theory and Applications
Peyman Milanfar
2020-12-11T15:00:00-08:00 - 2020-12-11T15:30:00-08:00
Larry Zitnick - fastMRI
Larry Zitnick