

Poster in Workshop: Safe Generative AI

Inference, Fast and Slow: Reinterpreting VAEs for OOD Detection

Sicong (Sheldon) Huang · Jiawei He · Kry Yik Chau Lui


Abstract:

Although likelihood-based methods are theoretically appealing, deep generative models (DGMs) often produce unreliable likelihood estimates in practice, particularly for out-of-distribution (OOD) detection. We reinterpret variational autoencoders (VAEs) through the lens of fast and slow weights. Our approach is guided by the proposed Likelihood Path (LPath) Principle, which extends the classical likelihood principle. A critical decision in our method is the selection of statistics to feed into classical density estimation algorithms: the sweet spot contains just enough information to be sufficient for OOD detection, but not so much that it suffers from the curse of dimensionality. Our LPath principle achieves this by selecting the sufficient statistics that form the "path" toward the likelihood. We demonstrate that this likelihood path leads to state-of-the-art OOD detection performance, even when the likelihood itself is unreliable.
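The following is a minimal sketch of the general recipe the abstract describes: collect a few low-dimensional per-sample statistics along a VAE's inference path and fit a classical density estimator on them for OOD scoring. The particular statistics chosen here (encoder mean and variance norms, reconstruction error), the kernel density estimator, and all function names are illustrative assumptions, not the paper's exact method.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

# Illustrative sketch only: the concrete statistics and the density
# estimator below are assumptions for demonstration, not the authors'
# exact LPath implementation.

def lpath_statistics(mu, logvar, x, x_recon):
    """Collect low-dimensional per-sample statistics along the VAE's
    inference path: encoder mean/variance norms and reconstruction error."""
    mu_norm = np.linalg.norm(mu, axis=1, keepdims=True)
    var_norm = np.linalg.norm(np.exp(logvar), axis=1, keepdims=True)
    recon_err = np.mean((x - x_recon) ** 2, axis=1, keepdims=True)
    return np.concatenate([mu_norm, var_norm, recon_err], axis=1)

def fit_ood_scorer(stats_in_dist, bandwidth=0.5):
    """Fit a classical density estimator on in-distribution statistics."""
    return KernelDensity(bandwidth=bandwidth).fit(stats_in_dist)

def ood_scores(scorer, stats_test):
    """Lower log-density under the fitted estimator => more likely OOD."""
    return -scorer.score_samples(stats_test)

# Usage with placeholder arrays standing in for a trained VAE's outputs.
rng = np.random.default_rng(0)
n, d_z, d_x = 1000, 16, 784
mu, logvar = rng.normal(size=(n, d_z)), rng.normal(size=(n, d_z))
x, x_recon = rng.normal(size=(n, d_x)), rng.normal(size=(n, d_x))

stats = lpath_statistics(mu, logvar, x, x_recon)
scorer = fit_ood_scorer(stats)
print(ood_scores(scorer, stats[:5]))
```

Because the statistics are only a handful of dimensions per sample, a simple estimator like KDE remains tractable, which is the motivation for restricting attention to a small sufficient set rather than the full likelihood.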
