A Survey of Graph Prompting Methods: Techniques, Applications, and Challenges
While deep learning has achieved great success on various tasks, task-specific model training notoriously relies on a large volume of labeled data. Recently, a new training paradigm, “pre-train, prompt, predict”, has been proposed to improve model generalization with limited labeled data. The main idea is that, on top of a pre-trained model, a prompting function uses a template to augment input samples with indicative context and reformulates the target task as one of the pre-training tasks. In this survey, we provide a review of prompting methods from the graph perspective. Graph data serves as a structured knowledge repository in many systems by explicitly modeling the interactions between entities. Compared with traditional methods, graph prompting functions can induce task-related context and apply templates grounded in structured knowledge, so that the pre-trained model generalizes adaptively to future samples. In particular, we introduce the basic concepts of graph prompt learning, organize existing work on designing graph prompting functions, and describe their applications to a variety of machine learning problems along with the open challenges. This survey attempts to bridge the gap between structured graphs and prompt design to facilitate future methodology development.
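To make the “augment the input, keep the pre-trained model fixed” idea concrete, the following is a minimal sketch of one common graph prompting pattern: a learnable prompt vector is added to every node feature before a frozen pre-trained encoder processes the graph, and only the prompt and a lightweight task head are tuned for the downstream task. The module names (FrozenGNN, GraphPrompt), the toy graph, and the training loop are illustrative assumptions, not the formulation of any specific method covered by the survey.

```python
# Minimal graph prompting sketch: learn a feature prompt for a frozen encoder.
import torch
import torch.nn as nn


class FrozenGNN(nn.Module):
    """Stand-in for a pre-trained graph encoder (one message-passing layer)."""
    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, adj: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # Aggregate neighbor features via the row-normalized adjacency matrix.
        return torch.relu(self.lin(adj @ x))


class GraphPrompt(nn.Module):
    """Learnable feature prompt: a single token added to every node."""
    def __init__(self, in_dim: int):
        super().__init__()
        self.token = nn.Parameter(torch.zeros(1, in_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.token  # broadcast the prompt over all nodes


# Toy graph: 4 nodes, symmetric adjacency with self-loops, 8-dim features.
adj = torch.tensor([[1, 1, 0, 0],
                    [1, 1, 1, 0],
                    [0, 1, 1, 1],
                    [0, 0, 1, 1]], dtype=torch.float)
adj = adj / adj.sum(dim=1, keepdim=True)           # row-normalize
x = torch.randn(4, 8)
labels = torch.tensor([0, 1, 0, 1])

encoder = FrozenGNN(in_dim=8, hid_dim=16)
for p in encoder.parameters():                      # pre-trained weights stay fixed
    p.requires_grad_(False)

prompt = GraphPrompt(in_dim=8)
head = nn.Linear(16, 2)                             # lightweight downstream task head
opt = torch.optim.Adam(list(prompt.parameters()) + list(head.parameters()), lr=1e-2)

for _ in range(100):                                # tune only the prompt and the head
    logits = head(encoder(adj, prompt(x)))
    loss = nn.functional.cross_entropy(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this sketch the prompt plays the role of the template: it injects task-related context into the node features so the frozen encoder, trained on a different pre-training objective, can be reused for the new task without updating its parameters.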