

Poster

Jointly Modeling Inter- & Intra-Modality Dependencies for Multi-modal Learning

Divyam Madaan · Taro Makino · Sumit Chopra · Kyunghyun Cho

Poster Room - TBD
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Supervised multi-modal learning involves mapping multiple modalities to a target label. Previous studies in this field have focused on capturing, in isolation, either the inter-modality dependencies (the relationships between the modalities and the label) or the intra-modality dependencies (the relationship between a single modality and the label). We argue that conventional approaches relying solely on either inter- or intra-modality dependencies may not be optimal in general. We view the multi-modal learning problem through the lens of generative models, where the target is treated as a source of the multiple modalities and of the interactions between them. To that end, we propose the inter- & intra-modality modeling (I2M2) framework, which captures and integrates both inter- and intra-modality dependencies, leading to more accurate predictions. We evaluate our approach on real-world healthcare and vision-and-language datasets with state-of-the-art models, demonstrating superior performance over traditional methods that focus on only one type of modality dependency.
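The abstract describes the framework only at this level. As a rough illustration, the following is a minimal PyTorch sketch of one way to capture and integrate both dependency types: per-modality (intra-modality) predictors alongside a joint (inter-modality) predictor, combined by summing logits. The class name, architecture choices, and the logit-summing combination are assumptions made for illustration, not the paper's exact method.

    import torch
    import torch.nn as nn

    class I2M2Sketch(nn.Module):
        """Hypothetical sketch of inter- & intra-modality modeling.

        Intra-modality branches map each modality to the label on its
        own; an inter-modality branch models both modalities jointly.
        Summing the logits multiplies the branches' unnormalized class
        scores, integrating both kinds of evidence in one prediction.
        """

        def __init__(self, dim_a, dim_b, hidden, num_classes):
            super().__init__()
            # Intra-modality branches (one per modality).
            self.enc_a = nn.Sequential(nn.Linear(dim_a, hidden), nn.ReLU())
            self.enc_b = nn.Sequential(nn.Linear(dim_b, hidden), nn.ReLU())
            self.head_a = nn.Linear(hidden, num_classes)
            self.head_b = nn.Linear(hidden, num_classes)
            # Inter-modality branch over the concatenated representations.
            self.head_ab = nn.Linear(2 * hidden, num_classes)

        def forward(self, x_a, x_b):
            z_a, z_b = self.enc_a(x_a), self.enc_b(x_b)
            logits_a = self.head_a(z_a)   # depends on modality A alone
            logits_b = self.head_b(z_b)   # depends on modality B alone
            logits_ab = self.head_ab(torch.cat([z_a, z_b], dim=-1))  # joint
            return logits_a + logits_b + logits_ab

    # Toy usage: two 16-dimensional modalities, 3 classes, batch of 4.
    model = I2M2Sketch(dim_a=16, dim_b=16, hidden=32, num_classes=3)
    logits = model(torch.randn(4, 16), torch.randn(4, 16))
    print(logits.shape)  # torch.Size([4, 3])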
