Poster
Tackling Uncertain Correspondences for Multi-Modal Entity Alignment
Liyi Chen · Ying Sun · Shengzhe Zhang · Yuyang Ye · Wei Wu · Hui Xiong
Poster Room - TBD
Multi-modal entity alignment is crucial for integrating multi-modal knowledge graphs originating from different data sources. Existing works mainly focus on fully depicting entity features by designing various modality encoders or fusion approaches. However, uncertain correspondences between inter-modal or intra-modal cues, such as weak inter-modal associations, description diversity, and modality absence, still hinder the effective exploration of aligned entity similarities. To this end, in this paper, we propose a novel Tackling uncertain correspondences method for Multi-modal Entity Alignment (TMEA). Specifically, to handle diverse attribute knowledge descriptions, we design an alignment-augmented abstract representation that incorporates large language models and in-context learning into attribute alignment and filtering for generating and embedding the attribute abstract. To mitigate the influence of modality absence, we propose to unify all modality features into a shared latent subspace and generate pseudo features via variational autoencoders conditioned on the existing modal features. Then, we develop an inter-modal commonality enhancement mechanism based on cross-attention with orthogonal constraints to address weak semantic associations between modalities. Extensive experiments on two real-world datasets validate the effectiveness of TMEA with a clear improvement over competitive baselines.
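To make the inter-modal commonality enhancement concrete, below is a minimal, hypothetical sketch (not the authors' released code) of cross-attention between two modality feature sets with an orthogonality penalty that pushes the modality-common component apart from the modality-specific residual. The module name, dimensions, and the exact form of the penalty are assumptions for illustration only.

```python
# Hypothetical sketch of cross-attention with an orthogonal constraint,
# assuming PyTorch; not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalCommonality(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # Queries come from one modality; keys/values from the other.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x_a: torch.Tensor, x_b: torch.Tensor):
        # x_a, x_b: (batch, seq, dim) features from two modalities.
        common, _ = self.attn(query=x_a, key=x_b, value=x_b)
        common = self.proj(common)
        specific = x_a - common  # modality-specific residual
        # Orthogonality penalty: decorrelate common and specific parts.
        ortho = (F.normalize(common, dim=-1) *
                 F.normalize(specific, dim=-1)).sum(-1).pow(2).mean()
        return common, specific, ortho

if __name__ == "__main__":
    img = torch.randn(8, 1, 256)   # e.g. visual entity features (assumed size)
    txt = torch.randn(8, 1, 256)   # e.g. attribute-abstract features
    block = CrossModalCommonality(dim=256)
    common, specific, ortho_loss = block(img, txt)
    # In the full model this term would be added to the alignment objective.
    print(common.shape, ortho_loss.item())
```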