Poster

GOUDA: A General Graph Contrastive Learning Framework via Augmentation Unification

Jiaming Zhuo · Yintong Lu · Hui Ning · Kun Fu · Bingxin Niu · Dongxiao He · Chuan Wang · Yuanfang Guo · Zhen Wang · Xiaochun Cao · Liang Yang

Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

In real-world scenarios, networks (graphs) and their tasks possess unique characteristics, requiring versatile graph augmentation (GA) techniques to meet the varied demands of network analysis. Unfortunately, most Graph Contrastive Learning (GCL) frameworks are hampered by the specificity, complexity, and incompleteness of their GA techniques. Firstly, GAs designed for specific scenarios may compromise the universality of models if mishandled. Secondly, identifying and generating optimal augmentations generally incurs substantial computational overhead. Finally, the effectiveness of GCLs, even learnable ones, is constrained by the finite selection of available GAs. To overcome these limitations, this paper introduces a novel unified GA module, named UGA, after elucidating the local invariance of GCLs from the message-passing perspective of the graph encoder. In theory, this module can generalize any explicit GA (i.e., node, edge, attribute, and subgraph augmentations). Based on the proposed UGA, a novel general GCL framework, dubbed Graph cOntrastive UnifieD Augmentations (GOUDA), is proposed. It seamlessly integrates widely adopted contrastive losses with an introduced independence loss to fulfill the common requirements of augmentation consistency and diversity across diverse scenarios. Evaluations across various datasets and tasks demonstrate the generality and efficiency of the proposed GOUDA over existing state-of-the-art GCLs.
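To illustrate the explicit GAs the abstract refers to, here is a minimal sketch of two common augmentations (edge dropping and attribute masking) used to produce contrastive views of a graph. This is not the paper's UGA module or GOUDA framework; the function names, probabilities, and toy graph are illustrative assumptions only.

```python
import numpy as np

def drop_edges(edges, p, rng):
    # Edge augmentation (illustrative): keep each edge with probability 1 - p.
    keep = rng.random(len(edges)) >= p
    return [e for e, k in zip(edges, keep) if k]

def mask_attributes(X, p, rng):
    # Attribute augmentation (illustrative): zero out each feature
    # dimension with probability p, shared across all nodes.
    mask = (rng.random(X.shape[1]) >= p).astype(X.dtype)
    return X * mask  # broadcasts the column mask over rows

# Two stochastically augmented "views" of the same toy graph,
# as typically fed to the two branches of a GCL encoder.
rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]       # a 4-node cycle
X = np.ones((4, 8))                             # dummy node features
view1 = (drop_edges(edges, 0.2, rng), mask_attributes(X, 0.2, rng))
view2 = (drop_edges(edges, 0.2, rng), mask_attributes(X, 0.2, rng))
```

A learnable GCL would instead parameterize where these perturbations land; the point here is only that each explicit GA is a simple stochastic transform of the edge set or feature matrix, which is the family of operations UGA is claimed to generalize.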
