Poster

Déjà Vu Memorization in Vision–Language Models

Bargav Jayaraman · Chuan Guo · Kamalika Chaudhuri

East Exhibit Hall A-C #4806
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Vision-Language Models (VLMs) have emerged as the state-of-the-art representation learning solution, with myriad downstream applications such as image classification, retrieval, and generation. A natural question is whether these models memorize their training data, which also has implications for generalization. We propose a new method for measuring memorization in VLMs, which we call déjà vu memorization. For VLMs trained on image-caption pairs, we show that the model indeed retains information about individual objects in the training images beyond what can be inferred from correlations or the image caption. We evaluate déjà vu memorization at both the sample and population level, and show that it is significant for OpenCLIP trained on as many as 50M image-caption pairs. Finally, we show that text randomization considerably mitigates memorization risk while only moderately impacting the model's downstream task performance. The code is available here: https://github.com/facebookresearch/VLMDejaVu.
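The measurement idea described above can be sketched in miniature. The snippet below is a hypothetical illustration, not the paper's actual procedure: it assumes precomputed embeddings, uses cosine similarity to retrieve the public images closest to a training caption's embedding, and checks how many of the true training image's objects appear among the retrieved images. Comparing this count for the target model against a reference model trained without that image-caption pair would then indicate memorization beyond correlation. All names (`recovered_objects`, `k`, the toy data) are invented for this sketch.

```python
import numpy as np

def recovered_objects(caption_emb, public_img_embs, public_img_objects,
                      true_objects, k=2):
    """Return the true training-image objects that appear among the
    objects of the top-k public images most similar to the caption
    embedding (cosine similarity). Purely illustrative."""
    # Cosine similarity between the caption embedding and each public image.
    sims = public_img_embs @ caption_emb / (
        np.linalg.norm(public_img_embs, axis=1) * np.linalg.norm(caption_emb)
        + 1e-12)
    top = np.argsort(-sims)[:k]
    # Union of object annotations over the retrieved images.
    retrieved = set().union(*(public_img_objects[i] for i in top))
    return retrieved & set(true_objects)

# Toy example with 2-D embeddings and hand-labeled object sets.
caption_emb = np.array([1.0, 0.0])
public_img_embs = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
public_img_objects = [{"dog", "ball"}, {"cat"}, {"dog", "tree"}]
hits = recovered_objects(caption_emb, public_img_embs, public_img_objects,
                         true_objects=["dog", "ball", "tree"], k=2)
print(sorted(hits))  # objects of the training image recovered via retrieval
```

A population-level score would aggregate such per-sample counts over many training pairs and subtract the reference model's counts, so that objects recoverable from caption-object correlations alone do not register as memorization.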
