Poster in Workshop: 5th Workshop on Self-Supervised Learning: Theory and Practice

When Do We Not Need Larger Vision Models?

Baifeng Shi · Ziyang Wu · Maolin Mao · Xin Wang · Trevor Darrell


Abstract: Scaling up the size of vision models has been the de facto standard to obtain more powerful visual representations. In this work, we discuss the point beyond which larger vision models are not necessary. We demonstrate the power of Scaling on Scales (S$^2$), whereby a pre-trained and frozen smaller vision model (e.g., ViT-B or ViT-L), run over multiple image scales, can outperform larger models (e.g., ViT-H or ViT-G) on classification, segmentation, depth estimation, Multimodal LLM (MLLM) benchmarks, and robotic manipulation. We further show that features of larger vision models can be well approximated by those of multi-scale smaller models through a linear transform, which suggests a multi-scale smaller model has comparable learning capacity to a larger model.
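The sketch below is a rough, simplified illustration of the S$^2$ idea described in the abstract, not the authors' implementation: a frozen small backbone is run at the native scale and at an enlarged scale (split into native-size crops), and the pooled features from each scale are concatenated. The backbone choice (timm's vit_base_patch16_224), the scale set, the average pooling of crop features, and the function name s2_features are all illustrative assumptions.

```python
# Minimal sketch of multi-scale feature extraction in the spirit of S^2.
# Assumes a frozen ViT from timm; scales and pooling are illustrative choices.
import torch
import torch.nn.functional as F
import timm

# Pre-trained, frozen smaller vision model (num_classes=0 -> pooled features).
backbone = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=0)
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False

@torch.no_grad()
def s2_features(images, scales=(1, 2)):
    """Run the frozen backbone at several image scales and concatenate features."""
    feats = []
    base = images.shape[-1]  # assume square inputs at the backbone's native size
    for s in scales:
        x = F.interpolate(images, size=(base * s, base * s),
                          mode="bilinear", align_corners=False)
        if s == 1:
            feats.append(backbone(x))  # pooled feature at the native scale
        else:
            # Split the enlarged image into s*s native-size crops, embed each crop,
            # then average the crop features back to one vector per image.
            crops = x.unfold(2, base, base).unfold(3, base, base)  # B,C,s,s,H,W
            crops = crops.permute(0, 2, 3, 1, 4, 5).reshape(-1, x.shape[1], base, base)
            f = backbone(crops).reshape(images.shape[0], s * s, -1).mean(dim=1)
            feats.append(f)
    return torch.cat(feats, dim=-1)  # channel-wise concat across scales

images = torch.randn(2, 3, 224, 224)
print(s2_features(images).shape)  # torch.Size([2, 1536]) for ViT-B at two scales
```

The concatenated multi-scale feature is wider than the single-scale one (here 2 x 768 dimensions), which loosely mirrors the abstract's observation that a linear transform over multi-scale small-model features can approximate larger-model features.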
