Poster in Workshop: New Frontiers in Graph Learning (GLFrontiers)
Exploring the Potential of Large Language Models (LLMs) in Learning on Graphs
Zhikai Chen · Haitao Mao · Hang Li · Wei Jin · Hongzhi Wen · Xiaochi Wei · Shuaiqiang Wang · Dawei Yin · Wenqi Fan · Hui Liu · Jiliang Tang
Keywords: [ Graph Neural Networks ] [ Large Language Models ] [ Node Classification ]
Learning on graphs has attracted immense attention due to its wide range of real-world applications. The most popular pipeline for learning on graphs with textual node attributes relies primarily on Graph Neural Networks (GNNs) and uses shallow text embeddings as initial node representations, which limits both general knowledge and deep semantic understanding. In recent years, Large Language Models (LLMs) have been shown to possess extensive common knowledge and powerful semantic comprehension abilities that have revolutionized existing workflows for handling text data. In this paper, we aim to explore the potential of LLMs in graph machine learning, especially for the node classification task, and investigate two possible pipelines: LLMs-as-Enhancers and LLMs-as-Predictors. The former leverages LLMs to enhance nodes' text attributes with their massive knowledge and then generates predictions through GNNs, while the latter directly employs LLMs as standalone predictors. We conduct comprehensive and systematic studies of these two pipelines under various settings. From these empirical results, we make original observations and find new insights that open new possibilities and suggest promising directions for leveraging LLMs for learning on graphs.
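To make the two pipelines concrete, the sketch below shows one plausible instantiation; it is an illustration under stated assumptions (PyTorch Geometric for the GNN, sentence-transformers as one possible LM encoder, and a hypothetical `llm_complete(prompt) -> str` completion client), not the authors' exact implementation or prompt design.

```python
# Illustrative sketch of the two pipelines from the abstract. Assumptions:
# PyTorch Geometric for the GNN, sentence-transformers as the text encoder,
# and a hypothetical `llm_complete` LLM client supplied by the caller.
import torch
import torch.nn.functional as F
from sentence_transformers import SentenceTransformer
from torch_geometric.nn import GCNConv


# --- Pipeline 1: LLMs-as-Enhancers ----------------------------------------
def encode_node_texts(texts: list[str]) -> torch.Tensor:
    """LM embeddings of node text replace shallow (e.g., bag-of-words) features."""
    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder choice
    return torch.tensor(encoder.encode(texts), dtype=torch.float)


class GCN(torch.nn.Module):
    """Two-layer GCN that predicts node labels from the enhanced features."""

    def __init__(self, in_dim: int, hidden_dim: int, num_classes: int):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)  # per-node class logits


# --- Pipeline 2: LLMs-as-Predictors ----------------------------------------
def classify_node(node_text: str, neighbor_texts: list[str],
                  labels: list[str], llm_complete) -> str:
    """Prompt an LLM to classify a node directly; the prompt format is illustrative."""
    prompt = (
        f"Classify the following paper into one of: {', '.join(labels)}.\n"
        f"Abstract: {node_text}\n"
        f"Abstracts of linked papers: {' | '.join(neighbor_texts[:3])}\n"
        "Answer with the category name only."
    )
    return llm_complete(prompt).strip()
```

In this reading, the enhancer route keeps the GNN's inductive bias over graph structure while upgrading the node features, whereas the predictor route trades explicit structure for the LLM's broader knowledge, passing neighbor text through the prompt instead; the paper compares the two empirically.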