Poster

Learning from Offline Foundation Features with Tensor Augmentations

Emir Konuk · Christos Matsoukas · Moein Sorkhei · Phitchapha Lertsiravarameth · Kevin Smith

Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract: We introduce Learning from Offline Foundation Features with Tensor Augmentations (LOFF-TA), an efficient training scheme designed to harness the capabilities of foundation models in limited resource settings where their direct development is not feasible. LOFF-TA involves training a compact classifier on cached feature embeddings from a frozen foundation model, resulting in up to $37\times$ faster training and up to $26\times$ reduced GPU memory usage. Because the embeddings of augmented images would be too numerous to store, yet the augmentation process is essential for training, we propose to apply tensor augmentations to the cached embeddings of the original non-augmented images. LOFF-TA makes it possible to leverage the power of foundation models, regardless of their size, in settings with limited computational capacity. Moreover, LOFF-TA can be used to apply foundation models to high-resolution images without increasing compute. In certain scenarios, we find that training with LOFF-TA yields better results than directly fine-tuning the foundation model.
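The abstract describes a two-stage scheme: first cache embeddings of the original (non-augmented) images from a frozen foundation model, then train a compact classifier on those cached tensors, applying augmentations in embedding space rather than pixel space. Below is a minimal sketch of that workflow. The tiny stand-in encoder, the classifier head, and the particular tensor augmentations (additive Gaussian noise and mixup on embeddings) are illustrative assumptions, not the authors' actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Stage 1: cache embeddings from a frozen foundation model (run once, offline).
# A tiny linear encoder stands in for a large pretrained model here.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 768)).eval()
for p in encoder.parameters():
    p.requires_grad_(False)

images = torch.randn(256, 3, 64, 64)          # toy dataset
labels = torch.randint(0, 10, (256,))
with torch.no_grad():
    cached = encoder(images)                  # embeddings of non-augmented images

# Stage 2: train a compact classifier on the cached features, applying
# augmentations directly to the embedding tensors instead of the images.
def tensor_augment(z, y, noise_std=0.1, alpha=0.2):
    z = z + noise_std * torch.randn_like(z)   # additive Gaussian noise (assumed)
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(z.size(0))
    # mixup in feature space (assumed choice of tensor augmentation)
    return lam * z + (1 - lam) * z[perm], y, y[perm], lam

classifier = nn.Sequential(nn.LayerNorm(768), nn.Linear(768, 10))
opt = torch.optim.AdamW(classifier.parameters(), lr=1e-3)

for epoch in range(5):
    order = torch.randperm(cached.size(0))
    for i in range(0, cached.size(0), 64):
        idx = order[i:i + 64]
        z, ya, yb, lam = tensor_augment(cached[idx], labels[idx])
        logits = classifier(z)
        loss = lam * F.cross_entropy(logits, ya) + (1 - lam) * F.cross_entropy(logits, yb)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Since the expensive encoder never appears in the training loop, each step touches only the small classifier, which is the source of the reported speed and memory savings.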
