Poster in Workshop: UniReps: Unifying Representations in Neural Models
An Information Criterion for Controlled Disentanglement of Multimodal Data
Chenyu Wang · Sharut Gupta · Xinyi Zhang · Sana Tonekaboni · Stefanie Jegelka · Tommi Jaakkola · Caroline Uhler
Keywords: [ Self-Supervised Learning ] [ Disentanglement ] [ Information Theory ] [ Multimodal Representation Learning ]
Presentation: UniReps: Unifying Representations in Neural Models, Sat 14 Dec, 8:15 a.m. – 5:30 p.m. PST
Abstract:
Multimodal representation learning seeks to relate and decompose the information available in multiple modalities. By disentangling modality-specific information from information that is shared across modalities, we can improve interpretability and robustness and enable tasks such as counterfactual generation. However, separating these components is challenging because they are deeply entangled in real-world data. We propose $\textbf{Disentangled Self-Supervised Learning}$ (DisentangledSSL), a novel self-supervised approach that effectively learns disentangled representations even when the so-called $\textit{Minimum Necessary Information}$ (MNI) point is not attainable. It outperforms baselines on multiple synthetic and real-world datasets, excelling in downstream tasks including prediction on vision-language data and molecule-phenotype retrieval on biological data.
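The abstract describes a general recipe: for each modality, learn a shared representation that is aligned across modalities and a modality-specific representation that is kept disentangled from it. The snippet below is a minimal illustrative sketch of that recipe under stated assumptions, not the authors' DisentangledSSL objective; the two-head encoder architecture, the InfoNCE alignment term, and the cross-correlation penalty are all assumptions made for the example.

```python
# Illustrative sketch only: a generic two-branch disentanglement setup for two
# modalities. NOT the authors' DisentangledSSL objective; the losses and
# architectures here are assumptions made for the example.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadEncoder(nn.Module):
    """Maps one modality to a shared embedding and a modality-specific one."""
    def __init__(self, in_dim: int, dim: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.shared_head = nn.Linear(128, dim)    # intended to carry cross-modal info
        self.specific_head = nn.Linear(128, dim)  # intended to carry modality-only info

    def forward(self, x):
        h = self.backbone(x)
        return self.shared_head(h), self.specific_head(h)

def info_nce(z1, z2, temperature: float = 0.1):
    """Contrastive loss pulling paired shared embeddings together."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.size(0))
    return F.cross_entropy(logits, targets)

def decorrelation(shared, specific):
    """Crude independence proxy: penalize cross-correlation between the heads."""
    s = shared - shared.mean(0)
    p = specific - specific.mean(0)
    cov = (s.t() @ p) / (s.size(0) - 1)
    return cov.pow(2).mean()

# Toy training step on random "paired" data (e.g., image and text features).
enc_a, enc_b = TwoHeadEncoder(32), TwoHeadEncoder(48)
opt = torch.optim.Adam([*enc_a.parameters(), *enc_b.parameters()], lr=1e-3)

xa, xb = torch.randn(16, 32), torch.randn(16, 48)  # one paired batch
za_sh, za_sp = enc_a(xa)
zb_sh, zb_sp = enc_b(xb)
loss = (info_nce(za_sh, zb_sh)         # align shared parts across modalities
        + decorrelation(za_sh, za_sp)  # keep each specific part distinct ...
        + decorrelation(zb_sh, zb_sp)) # ... from its shared counterpart
opt.zero_grad(); loss.backward(); opt.step()
print(f"toy loss: {loss.item():.4f}")
```

Note that the cross-correlation penalty is only a rough proxy for statistical independence; the paper's information-theoretic criterion, which also handles the regime where the MNI point is not attainable, is more refined than this sketch.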