Poster
in
Workshop: Generative AI and Creativity: A dialogue between machine learning researchers and creative professionals

Denoising Monte Carlo Renders with Diffusion Models

Vaibhav Vavilala · David Forsyth · Rahul Vasanth

Sat 14 Dec 1 p.m. PST — 2 p.m. PST

Abstract:

Physically-based rendering lies at the core of many creative applications in gaming, cinema, and design. Unfortunately, these synthetic images often contain unwanted Monte Carlo noise, with variance that increases as the number of rays per pixel decreases. While this noise is zero-mean for good modern renderers, it can have heavy tails (most notably for scenes containing specular or refractive objects). Learned methods for restoring low-fidelity renders are highly developed, because suppressing render noise saves compute. We demonstrate that a diffusion model can successfully denoise low-fidelity renders. Furthermore, our method can be conditioned on a variety of natural render information, and this conditioning improves performance. Quantitative experiments show that our method is competitive with SOTA across a range of sampling rates. Qualitative examination of the reconstructions suggests that the image prior applied by a diffusion method strongly favors reconstructions that are "like" real images: straight shadow boundaries, curved specularities, and no "fireflies."
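The conditioning the abstract describes, feeding the denoiser auxiliary render information alongside the noisy image, is commonly implemented by stacking auxiliary buffers (e.g. albedo, normals, depth) with the noisy render along the channel axis. The sketch below illustrates only that input-assembly step; the function name, the choice of buffers, and the layout are assumptions for illustration, not the paper's actual interface.

```python
import numpy as np

def build_conditioned_input(noisy_render, aux_buffers):
    """Stack a noisy H x W x 3 render with auxiliary render buffers
    (each H x W x C) along the channel axis, forming the input a
    conditional denoiser would consume. Hypothetical sketch, not the
    paper's actual architecture."""
    return np.concatenate([noisy_render] + list(aux_buffers), axis=-1)

# Toy example: 4x4 render conditioned on albedo (3 ch), normals (3 ch),
# and depth (1 ch), giving a 10-channel network input.
H, W = 4, 4
noisy = np.random.rand(H, W, 3)
albedo = np.random.rand(H, W, 3)
normals = np.random.rand(H, W, 3)
depth = np.random.rand(H, W, 1)

x = build_conditioned_input(noisy, [albedo, normals, depth])
print(x.shape)  # (4, 4, 10)
```

Because buffers like albedo and normals are nearly noise-free even at low sample counts, concatenating them gives the model clean structural cues about geometry and material boundaries that the noisy radiance alone lacks.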
