

Poster

GraphFormers: GNN-nested Transformers for Representation Learning on Textual Graph

Junhan Yang · Zheng Liu · Shitao Xiao · Chaozhuo Li · Defu Lian · Sanjay Agrawal · Amit Singh · Guangzhong Sun · Xing Xie

Keywords: [ Representation Learning ] [ Graph Learning ] [ Deep Learning ] [ Transformers ]


Abstract:

Representation learning on textual graphs aims to generate low-dimensional embeddings for nodes based on their individual textual features and their neighbourhood information. Recent breakthroughs in pretrained language models and graph neural networks have pushed the corresponding techniques forward. Existing works mainly rely on a cascaded model architecture: the textual features of nodes are first encoded independently by language models, and the resulting textual embeddings are then aggregated by graph neural networks. However, this architecture is limited by its independent modeling of textual features. In this work, we propose GraphFormers, in which layerwise GNN components are nested alongside the transformer blocks of language models. With the proposed architecture, text encoding and graph aggregation are fused into an iterative workflow, so that each node's semantics are comprehended accurately from the global perspective. In addition, a progressive learning strategy is introduced, where the model is successively trained on manipulated data and original data to reinforce its capability of integrating information on the graph. Extensive evaluations are conducted on three large-scale benchmark datasets, where GraphFormers outperforms the SOTA baselines with comparable running efficiency. The source code is released at https://github.com/microsoft/GraphFormers .
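To make the nested architecture concrete, below is a minimal, illustrative sketch in PyTorch of how graph aggregation can be interleaved with text encoding layer by layer: at each layer, the nodes' [CLS]-style states exchange information through a graph attention step, and the resulting graph-aware token is fed back into each node's token sequence before the next transformer block. All names (GraphFormerSketch, NestedLayer, graph_attn, hidden_dim, etc.) are hypothetical and do not reproduce the official implementation at https://github.com/microsoft/GraphFormers; this is only a sketch of the idea described in the abstract.

```python
import torch
import torch.nn as nn


class NestedLayer(nn.Module):
    """One layer: graph aggregation over node-level [CLS] states, then text encoding."""

    def __init__(self, hidden_dim: int, num_heads: int = 4):
        super().__init__()
        # GNN component: attention among the nodes of one textual graph (center + neighbours).
        self.graph_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        # Transformer block: standard self-attention over each node's token sequence.
        self.text_block = nn.TransformerEncoderLayer(
            hidden_dim, num_heads, dim_feedforward=4 * hidden_dim, batch_first=True
        )

    def forward(self, token_states: torch.Tensor) -> torch.Tensor:
        # token_states: [num_nodes, seq_len, hidden_dim] for a single textual graph.
        cls_states = token_states[:, 0]                        # [num_nodes, hidden_dim]
        graph_ctx, _ = self.graph_attn(                        # exchange information across nodes
            cls_states.unsqueeze(0), cls_states.unsqueeze(0), cls_states.unsqueeze(0)
        )
        graph_ctx = graph_ctx.squeeze(0).unsqueeze(1)          # [num_nodes, 1, hidden_dim]
        # Prepend the graph-aware token so text encoding can attend to neighbourhood info.
        augmented = torch.cat([graph_ctx, token_states], dim=1)
        encoded = self.text_block(augmented)
        return encoded[:, 1:]                                  # drop the auxiliary token


class GraphFormerSketch(nn.Module):
    def __init__(self, vocab_size: int = 30522, hidden_dim: int = 128, num_layers: int = 3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.layers = nn.ModuleList(NestedLayer(hidden_dim) for _ in range(num_layers))

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: [num_nodes, seq_len]; node 0 is the center node, the rest are neighbours.
        states = self.embed(token_ids)
        for layer in self.layers:                              # text encoding and graph
            states = layer(states)                             # aggregation alternate layerwise
        return states[:, 0]                                    # [CLS]-style node embeddings


# Usage: one center node with 4 neighbours, each represented by 16 token ids.
model = GraphFormerSketch()
node_embeddings = model(torch.randint(0, 30522, (5, 16)))
print(node_embeddings.shape)  # torch.Size([5, 128])
```

The key design point the sketch captures is that graph aggregation happens inside every layer rather than once after text encoding, so each node's text is re-encoded with awareness of its neighbours throughout the stack, in contrast to the cascaded "encode then aggregate" baseline described above.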
