Poster in Workshop on Responsibly Building Next Generation of Multimodal Foundation Models
The Multi-faceted Monosemanticity in Multimodal Representations
Hanqi Yan · Yulan He · Yifei Wang
Keywords: [ Mechanistic Interpretability ] [ Multi-modal ] [ Multi-modal Representation ]
In this paper, we leverage recent advances in feature monosemanticity to extract interpretable features from deep multi-modal models, offering a data-driven understanding of modality gaps. Specifically, we investigate CLIP (Contrastive Language-Image Pretraining), a prominent visual-language representation model trained on extensive image-text pairs. Building on interpretability tools developed for single-modal models, we adapt these methodologies to assess the multi-modal interpretability of CLIP's features. In addition, we introduce the Modality Dominance Score (MDS) to attribute each feature's interpretability to its respective modality. Using these multi-modal interpretability tools, we transform CLIP's features into a more interpretable space and categorize them into three distinct classes: vision features, language features (both single-modal), and visual-language features (cross-modal). Our findings reveal that this categorization aligns closely with human cognitive understanding of the different modalities. These results indicate that large-scale multi-modal models, when equipped with advanced interpretability tools, offer valuable insights into the key connections and distinctions between data modalities. This work not only bridges the gap between cognitive science and machine learning but also introduces new data-driven tools to advance both fields.
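To make the categorization step concrete, the sketch below illustrates one plausible way a Modality Dominance Score could be computed and thresholded. The abstract does not give the MDS formula, so the specific definition used here (a normalized gap between a feature's mean activation on image inputs versus text inputs), the threshold `tau`, and the function names are all assumptions for illustration, not the authors' actual method. In practice, the activations would come from CLIP embeddings mapped into a monosemantic feature space (e.g., via a sparse autoencoder) rather than the synthetic stand-ins used here.

```python
# Hypothetical sketch of a Modality Dominance Score (MDS).
# Assumption: MDS = (mean image activation - mean text activation) / (sum),
# so +1 ~ vision-dominant, -1 ~ language-dominant, ~0 ~ cross-modal.
import numpy as np


def modality_dominance_scores(img_acts: np.ndarray, txt_acts: np.ndarray) -> np.ndarray:
    """img_acts, txt_acts: (n_samples, n_features) non-negative activations
    of the same interpretable features on image and text inputs."""
    img_mean = img_acts.mean(axis=0)   # average activation per feature (images)
    txt_mean = txt_acts.mean(axis=0)   # average activation per feature (texts)
    eps = 1e-8                         # avoid division by zero for dead features
    return (img_mean - txt_mean) / (img_mean + txt_mean + eps)


def categorize(mds: np.ndarray, tau: float = 0.5) -> np.ndarray:
    """Bucket features into the three classes described in the abstract
    (threshold tau is an illustrative choice, not from the paper)."""
    labels = np.full(mds.shape, "visual-language", dtype=object)  # cross-modal default
    labels[mds >= tau] = "vision"      # strongly image-driven features
    labels[mds <= -tau] = "language"   # strongly text-driven features
    return labels


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-ins for sparse-autoencoder activations of CLIP image/text embeddings.
    img_acts = rng.random((1000, 64))
    txt_acts = rng.random((800, 64))
    mds = modality_dominance_scores(img_acts, txt_acts)
    print(dict(zip(*np.unique(categorize(mds), return_counts=True))))
```

With real CLIP features, the resulting three buckets would correspond to the vision, language, and visual-language feature classes discussed above.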