

Poster

Diffusing Differentiable Representations

Yash Savani · Marc Finzi · J. Zico Kolter

Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

We introduce a novel, training-free method for sampling through differentiable functions using pretrained diffusion models. Rather than merely finding modes, our method achieves true sampling by pulling back the dynamics of the reverse-time process from the image space to the parameter space and updating the parameters according to this pulled-back process. We identify an implicit constraint on the samples induced by the forward process and demonstrate that enforcing this constraint improves the consistency and detail of the generated objects. Our method yields significant improvements in both the quality and diversity of generated implicit neural representations for images, panoramas, and 3D NeRFs compared to existing techniques. The proposed method generalizes to a wide range of differentiable representations, expanding the scope of problems to which diffusion models can be applied.
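The core idea of pulling back image-space reverse-time dynamics to parameter space can be illustrated with a minimal sketch. Everything here is a stand-in assumption, not the paper's implementation: the "renderer" is a fixed linear map `W` (in place of an INR/NeRF), and `predict_noise` is a placeholder for a pretrained diffusion model's epsilon prediction. The key step is the vector-Jacobian product that maps an image-space update direction onto the representation's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not the paper's code): a linear differentiable
# representation x = W @ theta stands in for an INR/NeRF renderer.
D, P = 16, 4                      # image dimension, parameter dimension
W = rng.standard_normal((D, P))   # fixed "renderer" Jacobian
theta = np.zeros(P)               # parameters of the representation

def predict_noise(x_t, t):
    # Placeholder for a pretrained diffusion model's noise prediction.
    return 0.1 * x_t

steps = 50
for i in range(steps):
    t = 1.0 - i / steps                                       # reverse-time schedule
    x = W @ theta                                             # render parameters to image space
    x_t = x + np.sqrt(max(t, 1e-6)) * rng.standard_normal(D)  # forward-process noising
    eps = predict_noise(x_t, t)                               # image-space update direction
    # Pull back the image-space update to parameter space via the
    # Jacobian transpose (a vector-Jacobian product; for a nonlinear
    # renderer this would be computed by automatic differentiation).
    grad_theta = W.T @ eps
    theta -= 0.05 * grad_theta

print(np.isfinite(theta).all())
```

For a real differentiable representation, `W.T @ eps` would be replaced by backpropagating the image-space direction through the renderer, which is what lets a diffusion model trained only on images drive updates to arbitrary parameterizations.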
