
GraphGPT: Graph Instruction Tuning for Large Language Models

Authors :
Tang, Jiabin
Yang, Yuhao
Wei, Wei
Shi, Lei
Su, Lixin
Cheng, Suqi
Yin, Dawei
Huang, Chao
Publication Year :
2023

Abstract

Graph Neural Networks (GNNs) have evolved to understand graph structures through recursive exchanges and aggregations among nodes. To enhance robustness, self-supervised learning (SSL) has become a vital tool for data augmentation. Traditional methods often depend on fine-tuning with task-specific labels, limiting their effectiveness when labeled data is scarce. Our research tackles this by advancing graph model generalization in zero-shot learning environments. Inspired by the success of large language models (LLMs), we aim to create a graph-oriented LLM capable of exceptional generalization across various datasets and tasks without relying on downstream graph data. We introduce the GraphGPT framework, which integrates LLMs with graph structural knowledge through graph instruction tuning. This framework includes a text-graph grounding component to link textual and graph structures and a dual-stage instruction tuning approach with a lightweight graph-text alignment projector. These innovations allow LLMs to comprehend complex graph structures and enhance adaptability across diverse datasets and tasks. Our framework demonstrates superior generalization in both supervised and zero-shot graph learning tasks, surpassing existing benchmarks. The open-sourced model implementation of our GraphGPT is available at https://github.com/HKUDS/GraphGPT.

Comment: Accepted by SIGIR'2024, full paper
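As a rough illustration of the alignment idea described in the abstract, the sketch below shows one way a lightweight graph-text alignment projector could map graph-encoder node embeddings into an LLM's token-embedding space so they can be consumed alongside text during instruction tuning. The class name, dimensions, and usage pattern are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch (PyTorch) of a lightweight graph-text alignment projector.
# All names and dimensions here are illustrative assumptions, not GraphGPT's code.
import torch
import torch.nn as nn


class GraphTextProjector(nn.Module):
    """Maps node embeddings from a (frozen) graph encoder into the LLM embedding space."""

    def __init__(self, graph_dim: int = 128, llm_dim: int = 4096):
        super().__init__()
        # A single linear layer keeps the projector lightweight and cheap to tune.
        self.proj = nn.Linear(graph_dim, llm_dim)

    def forward(self, node_embeddings: torch.Tensor) -> torch.Tensor:
        # node_embeddings: (num_nodes, graph_dim) produced by a pre-trained graph encoder
        # returns: (num_nodes, llm_dim) "graph tokens" in the LLM's embedding space
        return self.proj(node_embeddings)


# Hypothetical usage: project node embeddings into graph tokens, which would then be
# prepended to the text prompt embeddings; only the projector is updated during the
# graph instruction-tuning stage while the LLM and graph encoder stay frozen.
projector = GraphTextProjector()
graph_tokens = projector(torch.randn(10, 128))  # 10 nodes -> 10 graph tokens
print(graph_tokens.shape)  # torch.Size([10, 4096])
```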

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2310.13023
Document Type :
Working Paper