Poster
Composing graphical models with neural networks for structured representations and fast inference
Matthew Johnson · David Duvenaud · Alex Wiltschko · Ryan Adams · Sandeep R Datta
Area 5+6+7+8 #57
Keywords: [ Deep Learning or Neural Networks ] [ (Other) Unsupervised Learning Methods ] [ (Other) Probabilistic Models and Methods ] [ Variational Inference ] [ Nonlinear Dimension Reduction and Manifold Learning ] [ Graphical Models ]
We propose a general modeling and inference framework that combines the complementary strengths of probabilistic graphical models and deep learning methods. Our model family composes latent graphical models with neural network observation likelihoods. For inference, we use recognition networks to produce local evidence potentials, then combine them with the model distribution using efficient message-passing algorithms. All components are trained simultaneously with a single stochastic variational inference objective. We illustrate this framework by automatically segmenting and categorizing mouse behavior from raw depth video, and we demonstrate the approach on several other example models.
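The core computation described above can be illustrated with a minimal sketch: a recognition network outputs the natural parameters of a local Gaussian evidence potential, which is combined with a conjugate latent prior by adding natural parameters (the simplest case of message passing), and a neural network likelihood decodes the sampled latent; everything is trained with one stochastic variational objective. The sketch below assumes a single Gaussian latent node, diagonal precisions, a unit-variance Gaussian likelihood, and toy data; all sizes and names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the structured-VAE idea, under the assumptions stated above.
import jax
import jax.numpy as jnp

D_x, D_z, H = 10, 2, 32          # observation dim, latent dim, hidden width (assumed)
key = jax.random.PRNGKey(0)

def init_mlp(key, d_in, d_out):
    k1, k2 = jax.random.split(key)
    return dict(W1=0.1 * jax.random.normal(k1, (d_in, H)), b1=jnp.zeros(H),
                W2=0.1 * jax.random.normal(k2, (H, d_out)), b2=jnp.zeros(d_out))

def mlp(p, x):
    h = jnp.tanh(x @ p["W1"] + p["b1"])
    return h @ p["W2"] + p["b2"]

def recognition_potentials(p, x):
    # Recognition network -> local Gaussian evidence potential in natural parameters.
    out = mlp(p, x)
    J = jnp.exp(out[:, :D_z])        # diagonal precision (kept positive)
    h = out[:, D_z:]                 # precision-weighted mean
    return J, h

def posterior_from_messages(J_rec, h_rec):
    # Combine the evidence potential with an N(0, I) prior by adding natural parameters;
    # with richer graphical models this step becomes full message passing.
    J_post = J_rec + 1.0             # prior precision is the identity
    h_post = h_rec                   # prior has zero mean
    return h_post / J_post, 1.0 / J_post   # posterior mean and variance

def elbo(params, x, key):
    J, h = recognition_potentials(params["rec"], x)
    mu, var = posterior_from_messages(J, h)
    z = mu + jnp.sqrt(var) * jax.random.normal(key, mu.shape)   # reparameterized sample
    x_hat = mlp(params["dec"], z)                               # neural network likelihood mean
    log_lik = -0.5 * jnp.sum((x - x_hat) ** 2)                  # unit-variance Gaussian log-density (up to constants)
    kl = 0.5 * jnp.sum(mu ** 2 + var - jnp.log(var) - 1.0)      # KL to the N(0, I) prior
    return (log_lik - kl) / x.shape[0]

k1, k2, k3 = jax.random.split(key, 3)
params = dict(rec=init_mlp(k1, D_x, 2 * D_z), dec=init_mlp(k2, D_z, D_x))
x = jax.random.normal(k3, (64, D_x))                            # stand-in minibatch

grad_fn = jax.jit(jax.grad(elbo))
for step in range(100):                                         # single stochastic variational objective
    key, sub = jax.random.split(key)
    grads = grad_fn(params, x, sub)
    params = jax.tree_util.tree_map(lambda p, g: p + 1e-3 * g, params, grads)  # gradient ascent on the ELBO
```

In the full framework the prior is a structured latent graphical model (e.g., a switching linear dynamical system for the mouse-behavior case study), so the combine step runs conjugate message passing rather than a single natural-parameter addition, but the recognition potentials, sampling, and single training objective are used in the same way.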