

Poster

Don't Blame the ELBO! A Linear VAE Perspective on Posterior Collapse

James Lucas · George Tucker · Roger Grosse · Mohammad Norouzi

East Exhibition Hall B, C #123

Keywords: [ Generative Models ] [ Deep Learning ] [ Latent Variable Models ] [ Probabilistic Methods ]


Abstract:

Posterior collapse in Variational Autoencoders (VAEs) with uninformative priors arises when the variational posterior distribution closely matches the prior for a subset of latent variables. This paper presents a simple and intuitive explanation for posterior collapse through the analysis of linear VAEs and their direct correspondence with Probabilistic PCA (pPCA). We explain how posterior collapse may occur in pPCA due to local maxima in the log marginal likelihood. Unexpectedly, we prove that the ELBO objective for the linear VAE does not introduce additional spurious local maxima relative to the log marginal likelihood. We show further that training a linear VAE with exact variational inference recovers a uniquely identifiable global maximum corresponding to the principal component directions. Empirically, we find that our linear analysis is predictive even for high-capacity, non-linear VAEs and helps explain the relationship between the observation noise, local maxima, and posterior collapse in deep Gaussian VAEs.
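To make the pPCA / linear-VAE correspondence the abstract refers to concrete, below is a minimal numerical sketch (not code from the paper): it evaluates the standard pPCA log marginal likelihood, p(x) = N(x; mu, W Wᵀ + sigma² I), and the ELBO of a linear Gaussian VAE whose encoder is set to the exact posterior, then checks that the two agree. The parameterization, variable names (W, mu, sigma2), and data dimensions are illustrative assumptions.

```python
import numpy as np

# Sketch only: standard pPCA and a linear Gaussian VAE with an exact posterior.
# All names and dimensions below are illustrative, not taken from the paper.

rng = np.random.default_rng(0)
d, k, n = 5, 2, 1000

# Ground-truth pPCA generative model: x = W z + mu + eps, z ~ N(0, I), eps ~ N(0, sigma2 I)
W_true = rng.normal(size=(d, k))
mu_true = rng.normal(size=d)
sigma2_true = 0.1
Z = rng.normal(size=(n, k))
X = Z @ W_true.T + mu_true + np.sqrt(sigma2_true) * rng.normal(size=(n, d))


def ppca_log_marginal(X, W, mu, sigma2):
    """Exact pPCA log marginal likelihood: x ~ N(mu, W W^T + sigma2 I)."""
    d = X.shape[1]
    C = W @ W.T + sigma2 * np.eye(d)
    diff = X - mu
    _, logdet = np.linalg.slogdet(C)
    quad = np.einsum('ni,ij,nj->n', diff, np.linalg.inv(C), diff)
    return np.sum(-0.5 * (d * np.log(2 * np.pi) + logdet + quad))


def linear_vae_elbo_exact(X, W, mu, sigma2):
    """ELBO of a linear VAE whose encoder equals the exact pPCA posterior,
    q(z|x) = N(M^{-1} W^T (x - mu), sigma2 M^{-1}) with M = W^T W + sigma2 I.
    With the exact posterior the KL gap vanishes, so the ELBO is tight."""
    d, k = W.shape
    M = W.T @ W + sigma2 * np.eye(k)
    Minv = np.linalg.inv(M)
    diff = X - mu
    z_mean = diff @ W @ Minv            # posterior means, shape (n, k)
    z_cov = sigma2 * Minv               # shared posterior covariance, (k, k)
    # E_q[log p(x|z)]: Gaussian reconstruction term with analytic expectation
    recon_quad = np.sum((diff - z_mean @ W.T) ** 2, axis=1)
    recon_trace = np.trace(W @ z_cov @ W.T)
    recon = -0.5 * (d * np.log(2 * np.pi * sigma2)
                    + (recon_quad + recon_trace) / sigma2)
    # KL(q(z|x) || N(0, I)) for each data point
    kl = 0.5 * (np.trace(z_cov) + np.sum(z_mean ** 2, axis=1)
                - k - np.linalg.slogdet(z_cov)[1])
    return np.sum(recon - kl)


lml = ppca_log_marginal(X, W_true, mu_true, sigma2_true)
elbo = linear_vae_elbo_exact(X, W_true, mu_true, sigma2_true)
print(f"log marginal likelihood: {lml:.3f}")
print(f"ELBO (exact posterior):  {elbo:.3f}   (matches the marginal likelihood)")
```

Because the encoder here is the exact posterior, the ELBO equals the log marginal likelihood at any parameter setting; the paper's contribution concerns the shape of the full ELBO landscape and its local maxima, which this sketch does not attempt to reproduce.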
