

Oral Poster

Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs

Shengbang Tong · Ellis Brown · Penghao Wu · Sanghyun Woo · Adithya Jairam Vedagiri IYER · Sai Charitha Akula · Shusheng Yang · Jihan Yang · Manoj Middepogu · Ziteng Wang · Xichen Pan · Rob Fergus · Yann LeCun · Saining Xie

[ Project Page ]
Fri 13 Dec 11 a.m. PST — 2 p.m. PST
 
Oral presentation: Oral Session 5C
Fri 13 Dec 10 a.m. PST — 11 a.m. PST

Abstract:

We introduce Cambrian-1, a family of multimodal LLMs (MLLMs) designed with a vision-centric approach. While stronger language models can enhance multimodal capabilities, the design choices for vision components are often insufficiently explored and disconnected from visual representation learning research. This gap hinders accurate sensory grounding in real-world scenarios. Our study uses LLMs and visual instruction tuning as an interface to evaluate various visual representations, offering new insights into different models and architectures—self-supervised, strongly supervised, or combinations thereof—based on experiments with over 15 vision models. We critically examine existing MLLM benchmarks, addressing the difficulties involved in consolidating and interpreting results from various tasks. To further improve visual grounding, we propose the Spatial Vision Aggregator (SVA), a dynamic and spatially-aware connector that integrates vision features with LLMs while reducing the number of tokens. Additionally, we discuss the curation of high-quality visual instruction-tuning data from publicly available sources, emphasizing the importance of distribution balancing. Collectively, Cambrian-1 not only achieves state-of-the-art performance but also serves as a comprehensive, open cookbook for instruction-tuned MLLMs. We provide model weights, code, supporting tools, datasets, and detailed instruction-tuning and evaluation recipes. We hope our release will inspire and accelerate advancements in multimodal systems and visual representation learning.
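To make the connector idea concrete, below is a minimal PyTorch sketch of a spatially-aware aggregator in the spirit of the SVA described above: a fixed set of learnable queries cross-attends to feature maps from multiple vision encoders and emits a reduced number of tokens for the LLM. The class name, dimensions, and single-head attention are illustrative assumptions, not the authors' implementation; in particular, this sketch omits the local spatial windowing that the full design implies.

```python
# Hypothetical sketch of a spatially-aware vision aggregator (not the paper's code):
# a small grid of learnable queries cross-attends to feature maps from several
# vision encoders, producing a fixed (reduced) number of tokens for the LLM.
import torch
import torch.nn as nn


class SpatialVisionAggregatorSketch(nn.Module):
    def __init__(self, encoder_dims, llm_dim=4096, num_queries=576):
        super().__init__()
        # One learnable query per output token; the queries carry an implicit
        # spatial layout (e.g. a 24x24 grid when num_queries == 576).
        self.queries = nn.Parameter(torch.randn(num_queries, llm_dim) * 0.02)
        # Project each encoder's features into the LLM embedding space.
        self.projections = nn.ModuleList(
            [nn.Linear(d, llm_dim) for d in encoder_dims]
        )

    def forward(self, feature_maps):
        """feature_maps: list of tensors, each of shape (batch, num_patches_i, dim_i)."""
        batch = feature_maps[0].shape[0]
        q = self.queries.unsqueeze(0).expand(batch, -1, -1)  # (B, Q, D)
        aggregated = torch.zeros_like(q)
        for proj, feats in zip(self.projections, feature_maps):
            kv = proj(feats)  # (B, N_i, D)
            # Plain global cross-attention; a spatially-aware connector would
            # restrict each query to a local window of the feature map.
            attn = torch.softmax(
                q @ kv.transpose(1, 2) / kv.shape[-1] ** 0.5, dim=-1
            )
            aggregated = aggregated + attn @ kv
        # The fixed set of aggregated tokens is what is fed to the LLM,
        # regardless of how many patch tokens each encoder produced.
        return aggregated


# Usage: combine hypothetical 1024-dim and 768-dim encoder outputs into 576 tokens.
sva = SpatialVisionAggregatorSketch(encoder_dims=[1024, 768])
tokens = sva([torch.randn(2, 1024, 1024), torch.randn(2, 196, 768)])
print(tokens.shape)  # torch.Size([2, 576, 4096])
```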
