

Poster

Stacked Capsule Autoencoders

Adam Kosiorek · Sara Sabour · Yee Whye Teh · Geoffrey E Hinton

East Exhibition Hall B, C #36

Keywords: [ Representation Learning ] [ Algorithms ] [ Algorithms -> Unsupervised Learning; Applications -> Computer Vision; Applications -> Object Recognition; Deep Learning ] [ Atte ]


Abstract:

Objects are composed of a set of geometrically organized parts. We introduce the Stacked Capsule Autoencoder (SCAE), an unsupervised capsule autoencoder which explicitly uses geometric relationships between parts to reason about objects. Since these relationships do not depend on the viewpoint, our model is robust to viewpoint changes. SCAE consists of two stages. In the first stage, the model predicts presences and poses of part templates directly from the image and tries to reconstruct the image by appropriately arranging the templates. In the second stage, the SCAE predicts parameters of a few object capsules, which are then used to reconstruct part poses. Inference in this model is amortized and performed by off-the-shelf neural encoders, unlike in previous capsule networks. We find that object capsule presences are highly informative of the object class, which leads to state-of-the-art results for unsupervised classification on SVHN (55%) and MNIST (98.7%).
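To make the two-stage structure described in the abstract concrete, the sketch below outlines it in PyTorch. This is not the authors' implementation: the module names (`PartCapsuleAutoencoder`, `ObjectCapsuleAutoencoder`), the plain MLP/CNN encoders, the affine template warping, and the presence-weighted averaging of object-to-part pose predictions are illustrative assumptions that simplify the paper's mixture-model likelihood and set-transformer encoder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PartCapsuleAutoencoder(nn.Module):
    """Stage 1 (sketch): predict part poses/presences from the image and
    reconstruct the image by affine-warping learned part templates."""

    def __init__(self, n_parts=16, template_size=11, image_size=28):
        super().__init__()
        self.n_parts, self.image_size = n_parts, image_size
        # One small learned template image per part.
        self.templates = nn.Parameter(0.1 * torch.randn(n_parts, 1, template_size, template_size))
        # CNN encoder -> per-part affine pose (6 params) + presence logit (1).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * (image_size // 4) ** 2, n_parts * 7),
        )

    def forward(self, x):
        b = x.shape[0]
        params = self.encoder(x).view(b, self.n_parts, 7)
        poses = params[..., :6].reshape(b * self.n_parts, 2, 3)   # affine pose per part
        presences = torch.sigmoid(params[..., 6])                 # part presence probability
        # Warp every template into the image frame with its predicted pose.
        templates = self.templates.repeat(b, 1, 1, 1)
        grid = F.affine_grid(poses, (b * self.n_parts, 1, self.image_size, self.image_size),
                             align_corners=False)
        warped = F.grid_sample(templates, grid, align_corners=False)
        warped = warped.view(b, self.n_parts, self.image_size, self.image_size)
        # Mix presence-weighted templates into a single reconstruction.
        recon = (warped * presences[..., None, None]).sum(dim=1, keepdim=True).clamp(0, 1)
        return recon, params[..., :6], presences


class ObjectCapsuleAutoencoder(nn.Module):
    """Stage 2 (sketch): encode part poses/presences into a few object capsules,
    then reconstruct part poses via learned object-part relations."""

    def __init__(self, n_parts=16, n_objects=8, hidden=128):
        super().__init__()
        self.n_objects, self.n_parts = n_objects, n_parts
        self.encoder = nn.Sequential(              # MLP stand-in for the paper's set encoder
            nn.Linear(n_parts * 7, hidden), nn.ReLU(),
            nn.Linear(hidden, n_objects * 7),      # per-object pose (6) + presence logit (1)
        )
        # Learned object->part pose offsets; these do not depend on the viewpoint.
        self.op_relations = nn.Parameter(0.1 * torch.randn(n_objects, n_parts, 6))

    def forward(self, part_poses, part_presences):
        b = part_poses.shape[0]
        inp = torch.cat([part_poses, part_presences[..., None]], dim=-1).view(b, -1)
        obj = self.encoder(inp).view(b, self.n_objects, 7)
        obj_poses, obj_presences = obj[..., :6], torch.sigmoid(obj[..., 6])
        # Each object predicts every part pose as (object pose + learned offset);
        # predictions are averaged by object presence (a simplification of the mixture model).
        pred = obj_poses[:, :, None, :] + self.op_relations[None]       # (b, objects, parts, 6)
        weights = torch.softmax(obj_presences, dim=-1)[..., None, None]
        part_pose_recon = (weights * pred).sum(dim=1)                   # (b, parts, 6)
        return part_pose_recon, obj_presences


if __name__ == "__main__":
    x = torch.rand(4, 1, 28, 28)                   # dummy batch of MNIST-sized images
    pcae, ocae = PartCapsuleAutoencoder(), ObjectCapsuleAutoencoder()
    img_recon, poses, presences = pcae(x)
    pose_recon, obj_presences = ocae(poses, presences)
    print(img_recon.shape, pose_recon.shape, obj_presences.shape)
```

In this sketch the stage-1 reconstruction loss would be taken between `img_recon` and the input image, the stage-2 loss between `pose_recon` and the detached part poses, and `obj_presences` would serve as the features for unsupervised classification, mirroring the role object capsule presences play in the abstract.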
