Graph Self-Attention for learning graph representation with Transformer

01/30/2022
by Wonpyo Park, et al.

We propose a novel Graph Self-Attention module that enables Transformer models to learn graph representations. We aim to incorporate graph information into both the attention map and the hidden representations of the Transformer. To this end, we propose context-aware attention, which considers the interactions between the query, the key, and the graph information. Moreover, we propose a graph-embedded value to encode the graph information in the hidden representation. Our extensive experiments and ablation studies validate that our method successfully encodes graph representations in the Transformer architecture. Finally, our method achieves state-of-the-art performance on multiple graph representation learning benchmarks, ranging from graph classification on images and molecules to graph regression on quantum chemistry.
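The abstract's two ingredients, context-aware attention and a graph-embedded value, can be made concrete with a short sketch. Below is a minimal single-head PyTorch implementation that assumes pairwise graph information arrives as a dense (N, N, edge_dim) tensor of edge features; the specific interaction terms (query-edge and key-edge products added to the attention logits, and an additive edge term mixed into the aggregated values) are illustrative assumptions, not the paper's exact formulation.

    # A minimal sketch of graph self-attention. The dense edge-feature
    # encoding and the exact interaction terms below are assumptions
    # chosen to illustrate the abstract's description, not the paper's
    # published equations.
    import math
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GraphSelfAttention(nn.Module):
        """Single-head self-attention whose scores and values both see the graph.

        edge_feat is a dense (N, N, edge_dim) tensor of pairwise graph
        features (e.g., embeddings of edge types or of shortest-path
        distances); this dense encoding is an illustrative choice.
        """

        def __init__(self, dim: int, edge_dim: int):
            super().__init__()
            self.q_proj = nn.Linear(dim, dim)
            self.k_proj = nn.Linear(dim, dim)
            self.v_proj = nn.Linear(dim, dim)
            self.e_proj = nn.Linear(edge_dim, dim)  # project graph features
            self.out = nn.Linear(dim, dim)
            self.scale = 1.0 / math.sqrt(dim)

        def forward(self, x: torch.Tensor, edge_feat: torch.Tensor) -> torch.Tensor:
            # x: (N, dim) node features; edge_feat: (N, N, edge_dim)
            q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
            e = self.e_proj(edge_feat)  # (N, N, dim)

            # Context-aware attention: the plain query-key score plus terms
            # that let both the query and the key interact with the graph
            # feature of each (i, j) pair before the softmax.
            score = q @ k.t()  # (N, N)
            score = score + torch.einsum("id,ijd->ij", q, e)
            score = score + torch.einsum("jd,ijd->ij", k, e)
            attn = F.softmax(score * self.scale, dim=-1)

            # Graph-embedded value: mix the pairwise graph embedding into
            # the aggregated values so the hidden representation itself
            # carries structural information.
            out = attn @ v + torch.einsum("ij,ijd->id", attn, e)
            return self.out(out)

    if __name__ == "__main__":
        n_nodes, dim, edge_dim = 5, 16, 4
        layer = GraphSelfAttention(dim, edge_dim)
        x = torch.randn(n_nodes, dim)
        edge_feat = torch.randn(n_nodes, n_nodes, edge_dim)
        print(layer(x, edge_feat).shape)  # torch.Size([5, 16])

One design point worth noting: because the edge embedding enters both the logits and the values, graph structure shapes where a node attends and what it receives, which is exactly the split the abstract draws between the attention map and the hidden representation.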

Related research

02/07/2022: Structure-Aware Transformer for Graph Representation Learning
The Transformer architecture has gained growing attention in graph repre...

09/07/2020: Scalar Coupling Constant Prediction Using Graph Embedding Local Attention Encoder
Scalar coupling constant (SCC) plays a key role in the analysis of three...

10/12/2021: Relative Molecule Self-Attention Transformer
Self-supervised learning holds promise to revolutionize molecule propert...

03/10/2023: GATOR: Graph-Aware Transformer with Motion-Disentangled Regression for Human Mesh Recovery from a 2D Pose
3D human mesh recovery from a 2D pose plays an important role in various...

09/11/2023: Circle Feature Graphormer: Can Circle Features Stimulate Graph Transformer?
In this paper, we introduce two local graph features for missing link pr...

11/19/2022: Rethinking Batch Sample Relationships for Data Representation: A Batch-Graph Transformer based Approach
Exploring sample relationships within each mini-batch has shown great po...

10/14/2020: DA-Transformer: Distance-aware Transformer
Transformer has achieved great success in the NLP field by composing var...