

Spotlight Poster

Moving Off-the-Grid: Scene-Grounded Video Representations

Sjoerd van Steenkiste · Daniel Zoran · Yi Yang · Yulia Rubanova · Rishabh Kabra · Carl Doersch · Dilara Gokay · joseph heyward · Etienne Pot · Klaus Greff · Drew Hudson · Thomas Keck · Joao Carreira · Alexey Dosovitskiy · Mehdi S. M. Sajjadi · Thomas Kipf

[ Project Page ]
Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Current vision models typically maintain a fixed correspondence between their representation structure and image space. Each layer comprises a set of tokens arranged "on-the-grid," which biases each patch or token to encode information at a specific spatio-temporal location. In this work we present Moving Off-the-Grid (MooG), a self-supervised video representation model that offers an alternative approach, allowing tokens to move "off-the-grid" so that they can represent scene elements consistently even as those elements move across the image plane through time. By using a combination of cross-attention and positional embeddings, we disentangle the representation structure from the image structure. We find that a simple self-supervised objective, next-frame prediction, trained on video data results in a set of latent tokens that bind to specific scene structures and track them as they move. We demonstrate the usefulness of MooG's learned representation both qualitatively and quantitatively by training readouts on top of it for a variety of downstream tasks. We show that MooG provides a strong foundation for different vision tasks when compared to "on-the-grid" baselines.
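The abstract's core mechanism (latent tokens updated via cross-attention over frame features, with positional embeddings attached to the observations rather than the tokens) can be illustrated with a toy sketch. This is not the authors' implementation; all shapes, the single-head attention, and the residual update are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values, d):
    # Single-head attention: queries (Nq, d) read from keys_values (Nk, d).
    attn = softmax(queries @ keys_values.T / np.sqrt(d))
    return attn @ keys_values

rng = np.random.default_rng(0)
d = 32                   # feature dimension (toy value)
num_tokens = 8           # "off-the-grid" latent tokens, not tied to pixel locations
H = W = 4                # toy frame resolution as a patch grid

tokens = rng.normal(size=(num_tokens, d))   # latent state carried through time
pos_emb = rng.normal(size=(H * W, d))       # positional embeddings for grid locations
frame = rng.normal(size=(H * W, d))         # patch features of the current frame

# Update step: tokens cross-attend to the frame. Positional information is
# added to the observed patches, so location lives in the input, not in a
# fixed token-to-pixel assignment.
tokens = tokens + cross_attention(tokens, frame + pos_emb, d)

# Readout / next-frame prediction: grid-shaped queries (the positional
# embeddings) cross-attend to the off-grid tokens to produce an on-grid output.
pred_next = cross_attention(pos_emb, tokens, d)
print(pred_next.shape)  # (16, 32): one predicted feature per grid location
```

In this sketch the tokens have no fixed spatial address, so the same token can gather evidence from different grid locations in successive frames, which is the behavior the paper reports (tokens binding to scene elements and tracking them).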
