

Spotlight Poster

Multilingual Diversity Improves Vision-Language Representations

Thao Nguyen · Matthew Wallingford · Sebastin Santy · Wei-Chiu Ma · Sewoong Oh · Ludwig Schmidt · Pang Wei Koh · Ranjay Krishna

Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Massive web-crawled image-text datasets lay the foundation for recent progress in multimodal learning. These datasets are designed with the goal of training a model to do well on standard computer vision benchmarks, many of which, however, have been shown to be English-centric (e.g., ImageNet). Consequently, existing data curation techniques gravitate towards using predominantly English image-text pairs and discard many potentially useful non-English samples. Our work questions this practice. Multilingual data is inherently enriching not only because it provides a gateway to learn about culturally salient concepts, but also because it depicts common concepts differently from monolingual data. We thus conduct a systematic study to explore the performance benefits of using more samples of non-English origin with respect to English vision tasks. By translating all multilingual image-text pairs from a raw web crawl to English and re-filtering them, we increase the prevalence of multilingual data in the resulting training set. Pre-training on this dataset outperforms using English-only or English-dominated datasets on ImageNet, ImageNet distribution shifts, image-English-text retrieval, GeoDE, and on average across 38 tasks from the DataComp benchmark. In addition, we quantitatively show that English and non-English data are significantly different in both image and (translated) text space. We hope that our findings motivate future work to be more intentional about including multicultural and multilingual data, not just when non-English or geographically diverse tasks are involved, but to enhance model capabilities at large.
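
The sketch below illustrates the translate-then-re-filter curation step described in the abstract. It is not the authors' implementation: the data layout, the `translate_to_english` and `clip_similarity` helpers, and the similarity threshold are all assumptions made for illustration, with filtering framed as a CLIP-score cutoff in the style of DataComp baselines.

```python
# Minimal sketch of the translate-then-filter idea from the abstract.
# Assumptions (not from the paper): samples arrive as (image_url, caption, lang)
# records, a generic translation helper is available, and filtering keeps pairs
# whose image-text similarity exceeds a hypothetical cutoff.

from dataclasses import dataclass
from typing import Callable, Iterable, List


@dataclass
class Sample:
    image_url: str
    caption: str
    lang: str  # detected ISO language code of the raw caption


def translate_then_filter(
    samples: Iterable[Sample],
    translate_to_english: Callable[[str, str], str],  # (caption, lang) -> English caption
    clip_similarity: Callable[[str, str], float],     # (image_url, caption) -> similarity score
    threshold: float = 0.3,                           # hypothetical cutoff, not the paper's value
) -> List[Sample]:
    """Translate every non-English caption to English, then keep only pairs whose
    (translated) caption matches the image above the similarity cutoff."""
    kept = []
    for s in samples:
        caption_en = s.caption if s.lang == "en" else translate_to_english(s.caption, s.lang)
        if clip_similarity(s.image_url, caption_en) >= threshold:
            kept.append(Sample(s.image_url, caption_en, s.lang))
    return kept
```

Because filtering happens after translation, non-English pairs compete on equal footing with English ones, which is how this procedure raises the share of multilingual data in the final training set.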
