Poster

Learning feed-forward one-shot learners

Luca Bertinetto · João Henriques · Jack Valmadre · Philip Torr · Andrea Vedaldi

Area 5+6+7+8 #62

Keywords: [ Deep Learning or Neural Networks ] [ (Application) Computer Vision ] [ Similarity and Distance Learning ]


Abstract:

One-shot learning is usually tackled with generative models or discriminative embeddings. Discriminative methods based on deep learning, which are very effective in other learning scenarios, are ill-suited for one-shot learning because they require large amounts of training data. In this paper, we propose a method to learn the parameters of a deep model in one shot. We construct the learner as a second deep network, called a learnet, which predicts the parameters of a pupil network from a single exemplar. In this manner we obtain an efficient feed-forward one-shot learner, trained end-to-end by minimizing a one-shot classification objective in a learning-to-learn formulation. To make the construction feasible, we propose a number of factorizations of the parameters of the pupil network. We demonstrate encouraging results by learning characters from single exemplars in Omniglot, and by tracking visual objects from a single initial exemplar in the Visual Object Tracking benchmark.
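To make the learnet idea concrete, here is a minimal PyTorch sketch of one predicted layer. Everything in it is an assumption for illustration, not the authors' released code: the encoder architecture, the shapes, and the names (`FactorizedLearnet`, `to_diag`) are invented, and the factorization W(z) = M' diag(d(z)) M, with the exemplar-specific diagonal simplified to a per-channel scaling, is just one plausible instance of the parameter factorizations the abstract mentions.

```python
import torch
import torch.nn as nn

class FactorizedLearnet(nn.Module):
    """Sketch: a learnet predicts one layer of a pupil network from a
    single exemplar z.  Instead of regressing a full C x C x k x k
    filter bank, it predicts only a diagonal d(z), and the layer acts
    as W(z) = M' diag(d(z)) M with shared projections M and M'.
    All shapes and the encoder below are illustrative assumptions.
    """
    def __init__(self, channels=64, emb_dim=256):
        super().__init__()
        # Embed the exemplar into a vector (architecture is a guess).
        self.encode = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, emb_dim), nn.ReLU(),
        )
        # The learnet regresses only `channels` numbers per exemplar.
        self.to_diag = nn.Linear(emb_dim, channels)
        # Shared projections, learned once across all exemplars and
        # trained end-to-end with the one-shot objective.
        self.M = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.M_prime = nn.Conv2d(channels, channels, 1, bias=False)

    def forward(self, z, x_feat):
        # z: exemplar image (B,3,H,W); x_feat: pupil features (B,C,h,w).
        d = self.to_diag(self.encode(z))       # exemplar-specific diagonal
        h = self.M(x_feat)                     # shared projection M
        h = h * d[:, :, None, None]            # apply diag(d(z)) per channel
        return self.M_prime(h)                 # shared projection M'

net = FactorizedLearnet()
z = torch.randn(1, 3, 64, 64)        # the single exemplar
x_feat = torch.randn(1, 64, 32, 32)  # features of a candidate image
out = net(z, x_feat)                 # pupil layer with predicted parameters
```

The point of predicting only a diagonal is efficiency: the learnet's output grows linearly with the number of channels rather than quadratically with the full parameter count, which is what makes predicting a network's parameters in a single feed-forward pass tractable.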
