Instruction-based Hypergraph Pretraining

Abstract

Pretraining has been widely explored to augment the adaptability of graph learning models by transferring knowledge from large datasets to downstream tasks such as link prediction and classification. However, the gap between training objectives and the discrepancy between data distributions in pretraining and downstream tasks hinder the transfer of pretrained knowledge. Inspired by instruction-based prompts widely used in pretrained language models, we introduce instructions into graph pretraining. In this paper, we propose a novel pretraining framework named Instruction-based Hypergraph Pretraining (IHP). To overcome the discrepancy between pretraining and downstream tasks, text-based instructions provide explicit guidance on specific tasks for representation learning. Compared to learnable prompts, whose effectiveness depends on the quality and diversity of training data, text-based instructions intrinsically encapsulate task information and support the model's generalization beyond the structures seen during pretraining. To capture high-order relations with task information in a context-aware manner, a novel prompting hypergraph convolution layer is devised to integrate instructions into information propagation in hypergraphs. Extensive experiments conducted on three public datasets verify the superiority of IHP in various scenarios.
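
The abstract does not include pseudocode, so below is a minimal sketch of how a prompting hypergraph convolution layer might condition hypergraph message passing on an instruction embedding. The class name `PromptingHypergraphConv`, the sigmoid gating mechanism, and all tensor shapes are illustrative assumptions rather than the authors' implementation; the instruction text is assumed to be already encoded into a fixed-size vector (e.g., by a language model).

```python
# Illustrative sketch only: instruction-conditioned hypergraph convolution.
# Assumes the hypergraph is given as a node-hyperedge incidence matrix H (|V| x |E|)
# and the instruction is a pooled embedding vector.

import torch
import torch.nn as nn


class PromptingHypergraphConv(nn.Module):
    """Hypergraph convolution whose hyperedge messages are gated by an
    instruction embedding (a simple design choice for illustration)."""

    def __init__(self, dim: int, instr_dim: int):
        super().__init__()
        self.node_proj = nn.Linear(dim, dim)
        # Maps the instruction embedding into the node feature space so it
        # can modulate hyperedge aggregation.
        self.instr_gate = nn.Linear(instr_dim, dim)

    def forward(self, x: torch.Tensor, H: torch.Tensor, instr: torch.Tensor) -> torch.Tensor:
        # x: (|V|, dim) node features; H: (|V|, |E|) incidence matrix;
        # instr: (instr_dim,) pooled instruction embedding.
        Dv = H.sum(dim=1, keepdim=True).clamp(min=1)   # node degrees, (|V|, 1)
        De = H.sum(dim=0, keepdim=True).clamp(min=1)   # hyperedge degrees, (1, |E|)

        # Node -> hyperedge aggregation (mean over member nodes).
        edge_feat = (H.t() @ self.node_proj(x)) / De.t()

        # Instruction-conditioned gating of hyperedge messages.
        gate = torch.sigmoid(self.instr_gate(instr))   # (dim,)
        edge_feat = edge_feat * gate

        # Hyperedge -> node aggregation (mean over incident hyperedges).
        out = (H @ edge_feat) / Dv
        return torch.relu(out)


if __name__ == "__main__":
    num_nodes, num_edges, dim, instr_dim = 6, 3, 16, 32
    H = (torch.rand(num_nodes, num_edges) > 0.5).float()
    x = torch.randn(num_nodes, dim)
    instr = torch.randn(instr_dim)                     # e.g., an encoded task instruction
    layer = PromptingHypergraphConv(dim, instr_dim)
    print(layer(x, H, instr).shape)                    # torch.Size([6, 16])
```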

Publication
Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval
Liangwei Yang
Research Scientist at Salesforce Research

My research interests include graphs, recommender systems, and efficient modeling.