Poster
Generalization Gap in Amortized Inference
Mingtian Zhang · Peter Hayes · David Barber
Hall J (level 1) #113
Keywords: [ Variational Inference ] [ VAE ] [ lossless compression ] [ amortized inference ]
The ability of likelihood-based probabilistic models to generalize to unseen data is central to many machine learning applications, such as lossless compression. In this work, we study the generalization of a popular class of probabilistic models, the Variational Auto-Encoder (VAE). We discuss the two generalization gaps that affect VAEs and show that overfitting is usually dominated by amortized inference. Based on this observation, we propose a new training objective that improves the generalization of amortized inference. We demonstrate how our method can improve performance in the context of image modeling and lossless compression.
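To make the notion of an amortized inference gap concrete, below is a minimal sketch (not the authors' code or proposed objective) on a toy Gaussian VAE: it compares the ELBO given by a shared amortized encoder with the ELBO obtained after directly optimizing per-datapoint variational parameters, the improvement being an estimate of the amortization gap. All module sizes, data, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

def elbo(x, mu, logvar, decoder):
    """Single-sample Monte Carlo ELBO with a standard Gaussian prior,
    a Gaussian posterior, and a unit-variance Gaussian likelihood."""
    std = torch.exp(0.5 * logvar)
    z = mu + std * torch.randn_like(std)                       # reparameterization trick
    recon = decoder(z)
    log_px_z = -0.5 * ((x - recon) ** 2).sum(dim=1)            # log N(x | recon, I), up to a constant
    kl = 0.5 * (mu ** 2 + std ** 2 - logvar - 1).sum(dim=1)    # KL(q(z|x) || N(0, I))
    return (log_px_z - kl).mean()

torch.manual_seed(0)
x_dim, z_dim = 20, 5
encoder = nn.Linear(x_dim, 2 * z_dim)   # outputs [mu, logvar]
decoder = nn.Linear(z_dim, x_dim)
x = torch.randn(64, x_dim)              # stand-in for (unseen) test data

# ELBO under the amortized encoder q_phi(z | x).
mu, logvar = encoder(x).chunk(2, dim=1)
amortized_elbo = elbo(x, mu, logvar, decoder).item()

# ELBO after refining per-datapoint variational parameters with the decoder
# held fixed; the improvement estimates the amortization gap.
mu_opt = mu.detach().clone().requires_grad_(True)
logvar_opt = logvar.detach().clone().requires_grad_(True)
opt = torch.optim.Adam([mu_opt, logvar_opt], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = -elbo(x, mu_opt, logvar_opt, decoder)
    loss.backward()
    opt.step()
refined_elbo = elbo(x, mu_opt, logvar_opt, decoder).item()

print(f"amortized ELBO: {amortized_elbo:.3f}")
print(f"refined   ELBO: {refined_elbo:.3f}  (gap estimate: {refined_elbo - amortized_elbo:.3f})")
```

The printed gap is a noisy single-sample estimate on untrained networks, meant only to illustrate the quantity being discussed, not to reproduce the paper's results.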