Poster
Toward Robust Incomplete Multimodal Sentiment Analysis via Representation Factorization and Alignment
Mingcheng Li · Dingkang Yang · Yang Liu · Shunli Wang · Jiawei Chen · Shuaibing Wang · Jinjie Wei · Yue Jiang · Qingyao Xu · Xiaolu Hou · Mingyang Sun · Ziyun Qian · Dongliang Kou · Lihua Zhang
Multimodal Sentiment Analysis (MSA) is an important research area that aims to understand and recognize human sentiment through multiple modalities. The complementary information provided by multimodal fusion enables better sentiment analysis than using any single modality alone. Nevertheless, in real-world applications, many unavoidable factors can cause modalities to be missing in uncertain ways, hindering multimodal modeling and degrading model performance. To this end, we propose a Representation Factorization and Alignment (ReFA) framework for the MSA task under uncertain missing modalities. Specifically, we propose a fine-grained representation factorization module that extracts valuable sentiment information by factorizing each modality into sentiment-relevant and modality-specific representations through crossmodal translation and sentiment semantic reconstruction. Moreover, a hierarchical mutual information maximization mechanism is introduced to incrementally maximize the mutual information between multi-scale representations, aligning and reconstructing the high-level semantics they carry. Eventually, we propose a hierarchical adversarial learning mechanism that progressively aligns and adapts the latent distributions of the representations to produce robust joint multimodal representations. Comprehensive experiments on three datasets demonstrate that our framework significantly improves MSA performance under both uncertain missing-modality and complete-modality testing conditions.
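The abstract does not specify how the mutual information between multi-scale representations is estimated. As a purely illustrative sketch (not necessarily the estimator ReFA uses), one common choice is an InfoNCE-style lower bound on the mutual information between paired representations, which can then be maximized across scales; all function and variable names below are hypothetical:

```python
import numpy as np

def info_nce_lower_bound(z_a, z_b, temperature=0.1):
    """InfoNCE-style lower bound on I(z_a; z_b).

    z_a, z_b: (batch, dim) arrays of paired representations; row i of z_a
    and row i of z_b are a positive pair, off-diagonal rows act as negatives.
    Returns a scalar bounded above by log(batch).
    """
    # L2-normalize so the dot product is cosine similarity
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature  # (batch, batch) similarity matrix
    # numerically stable row-wise log-softmax; diagonal = positive pairs
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    batch = z_a.shape[0]
    return np.log(batch) + np.mean(np.diag(log_probs))
```

In a hierarchical scheme of the kind the abstract describes, one would evaluate such a bound between representations at successive scales (e.g. low-level vs. mid-level, mid-level vs. high-level) and maximize their sum during training; strongly aligned pairs push the bound toward its maximum of log(batch), while independent pairs keep it near zero.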