CoulGAT: An Experiment on Interpretability of Graph Attention Networks

12/18/2019
by   Burc Gokden, et al.

We present an attention mechanism inspired by the definition of the screened Coulomb potential. This attention mechanism was used to interpret the layers of the Graph Attention Network (GAT) model and the training dataset, by means of a flexible and scalable framework (CoulGAT) developed for this purpose. Using CoulGAT, a forest of plain and resnet models was trained and characterized with this attention mechanism on the CHAMPS dataset. The learnable variables of the attention mechanism are used to extract node-node and node-feature interactions that define an empirical standard model for the graph structure and hidden layers. This representation of the graph and hidden layers can serve as a tool to compare different models, optimize hidden layers, and extract a compact description of the graph structure of the dataset.
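The abstract does not spell out the functional form of the mechanism, but as a rough illustration, the sketch below builds a graph attention layer whose unnormalized scores follow the shape of the screened Coulomb (Yukawa) potential, V(r) = q_i q_j e^{-r/λ} / r. This is not the authors' exact formulation: the module name ScreenedCoulombAttention, the learned per-node "charge", the use of a pairwise embedding distance as r, and the learnable screening length λ are all illustrative assumptions.

```python
# Minimal sketch (NOT the CoulGAT paper's formulation) of a graph attention
# layer with screened-Coulomb-shaped scores: q_i * q_j * exp(-r / lambda) / r.
# Charges q and the screening length lambda are learnable; r is taken here,
# as an assumption, to be the pairwise distance between projected features.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ScreenedCoulombAttention(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)   # feature transform
        self.charge = nn.Linear(out_dim, 1, bias=False)      # per-node "charge" q_i
        self.log_screen = nn.Parameter(torch.zeros(1))       # log of screening length

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) node features; adj: (N, N) boolean adjacency (no self-loops)
        h = self.proj(x)                                      # (N, out_dim)
        q = self.charge(h).squeeze(-1)                        # (N,) learnable charges
        r = torch.cdist(h, h) + 1e-6                          # (N, N) pairwise "distance"
        lam = torch.exp(self.log_screen)                      # positive screening length
        # Screened-Coulomb-shaped score: q_i * q_j * exp(-r / lam) / r
        score = q.unsqueeze(0) * q.unsqueeze(1) * torch.exp(-r / lam) / r
        score = score.masked_fill(~adj, float("-inf"))        # restrict to graph edges
        alpha = F.softmax(score, dim=-1)                      # attention coefficients
        return alpha @ h                                      # aggregated node embeddings


# Tiny usage example on a 4-node cycle graph
if __name__ == "__main__":
    x = torch.randn(4, 8)
    adj = torch.tensor([[0, 1, 0, 1],
                        [1, 0, 1, 0],
                        [0, 1, 0, 1],
                        [1, 0, 1, 0]], dtype=torch.bool)
    layer = ScreenedCoulombAttention(8, 16)
    print(layer(x, adj).shape)  # torch.Size([4, 16])
```

In this sketch the learnable charges and screening length play the role of the interpretable attention variables the abstract refers to: after training, they can be inspected to characterize node-node interactions, under the assumptions stated above.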
