Compositional Visual Generation with Energy Based Models
Yilun Du, Shuang Li, Igor Mordatch
Spotlight presentation: Orals & Spotlights Track 13: Deep Learning/Theory
on 2020-12-08, 19:20 to 19:30 PST
Abstract: A vital aspect of human intelligence is the ability to compose increasingly complex concepts out of simpler ideas, enabling both rapid learning and adaptation of knowledge. In this paper, we show that energy-based models can exhibit this ability by directly combining probability distributions. Samples from the combined distribution correspond to compositions of concepts. For example, given a distribution for smiling faces and another for male faces, we can combine them to generate smiling male faces. This allows us to generate natural images that simultaneously satisfy conjunctions, disjunctions, and negations of concepts. We evaluate the compositional generation abilities of our model on the CelebA dataset of natural faces and on synthetic 3D scene images. We also demonstrate other unique advantages of our model, such as the ability to continually learn and incorporate new concepts, or to infer the composition of concept properties underlying an image.
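The composition operations the abstract describes can be written down directly: if each concept c is modeled by an energy function E(x|c) with p(x|c) ∝ exp(-E(x|c)), then conjunction corresponds to summing energies (a product of densities), disjunction to a mixture of densities, and negation to dividing out a concept's density. Below is a minimal PyTorch sketch of these operators and of the Langevin sampler typically used to draw images from the combined distribution; the function names, the loader in the usage comment, and the hyperparameter values (steps, step_size, noise_scale, alpha) are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def conjunction_energy(energies, x):
    # AND: product of concept densities <=> sum of energies,
    # p(x | c1, c2) ∝ exp(-(E1(x) + E2(x))).
    return sum(E(x) for E in energies)

def disjunction_energy(energies, x):
    # OR: mixture of concept densities <=> -logsumexp of negated energies,
    # p(x | c1 or c2) ∝ exp(-E1(x)) + exp(-E2(x)).
    return -torch.logsumexp(torch.stack([-E(x) for E in energies]), dim=0)

def negation_energy(energy_keep, energy_negate, x, alpha=0.5):
    # NOT: divide out the negated concept's density (alpha tempers its strength),
    # p(x | c1, not c2) ∝ exp(-(E1(x) - alpha * E2(x))).
    return energy_keep(x) - alpha * energy_negate(x)

def langevin_sample(energy_fn, x, steps=60, step_size=10.0, noise_scale=0.005):
    # Approximate sampling from p(x) ∝ exp(-E(x)) via Langevin dynamics:
    # repeatedly step down the energy gradient with injected Gaussian noise.
    for _ in range(steps):
        x = x.detach().requires_grad_(True)
        grad = torch.autograd.grad(energy_fn(x).sum(), x)[0]
        x = x - step_size * grad + noise_scale * torch.randn_like(x)
    return x.detach()

# Usage sketch (hypothetical loader): compose two concept EBMs and sample
# an image satisfying both concepts, starting from uniform noise.
# e_smiling, e_male = load_concept_ebm("smiling"), load_concept_ebm("male")
# x0 = torch.rand(1, 3, 128, 128)
# x = langevin_sample(lambda x: conjunction_energy([e_smiling, e_male], x), x0)
```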