ωGNNs: Deep Graph Neural Networks Enhanced by Multiple Propagation Operators

10/31/2022
by Moshe Eliasof, et al.

Graph Neural Networks (GNNs) are limited in their propagation operators. These operators often contain only non-negative elements and are shared across channels and layers, limiting the expressiveness of GNNs. Moreover, some GNNs suffer from over-smoothing, limiting their depth. By contrast, Convolutional Neural Networks (CNNs) can learn diverse propagation filters, and phenomena like over-smoothing are typically not apparent in CNNs. In this paper, we bridge this gap by incorporating trainable channel-wise weighting factors ω to learn and mix multiple smoothing and sharpening propagation operators at each layer. Our generic method is called ωGNN, and we study two variants: ωGCN and ωGAT. For ωGCN, we theoretically analyse its behaviour and the impact of ω on the obtained node features. Our experiments confirm these findings, demonstrating and explaining how both variants avoid over-smoothing. Additionally, we experiment with 15 real-world datasets on node- and graph-classification tasks, where our ωGCN and ωGAT perform on par with or better than state-of-the-art methods.
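To make the channel-wise mixing idea concrete, below is a minimal PyTorch sketch of an ω-weighted GCN-style layer. It assumes the mixed propagation operator takes the per-channel form (1 − ω)·I + ω·Â, where Â is the symmetrically normalized adjacency with self-loops; the class name, initialization, and exact blending form are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of an omega-weighted GCN-style layer: per-channel blend of the identity
# (no smoothing) and one step of normalized-adjacency smoothing.
# Assumption: operator is (1 - omega) * I + omega * A_hat, applied channel-wise.
import torch
import torch.nn as nn


class OmegaGCNLayer(nn.Module):
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.lin = nn.Linear(in_channels, out_channels, bias=False)
        # One trainable mixing factor per output channel (hypothetical init of 1.0,
        # which recovers plain GCN propagation; values outside [0, 1] can sharpen).
        self.omega = nn.Parameter(torch.ones(out_channels))

    def forward(self, x: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        # x: [num_nodes, in_channels], a_hat: [num_nodes, num_nodes] (dense, normalized)
        h = self.lin(x)                # channel mixing
        smoothed = a_hat @ h           # neighborhood averaging
        # Per-channel blend of the un-smoothed and smoothed signals.
        return (1.0 - self.omega) * h + self.omega * smoothed


if __name__ == "__main__":
    num_nodes, in_dim, out_dim = 5, 8, 4
    x = torch.randn(num_nodes, in_dim)
    a = (torch.rand(num_nodes, num_nodes) > 0.5).float()
    a = ((a + a.T) > 0).float()
    a.fill_diagonal_(1.0)              # add self-loops
    deg_inv_sqrt = a.sum(dim=1).pow(-0.5)
    a_hat = deg_inv_sqrt[:, None] * a * deg_inv_sqrt[None, :]
    layer = OmegaGCNLayer(in_dim, out_dim)
    print(layer(x, a_hat).shape)       # torch.Size([5, 4])
```

With ω fixed at 1 this reduces to standard GCN propagation, while learning smaller, negative, or larger values lets each channel interpolate between smoothing and sharpening, which is the behaviour the abstract attributes to ωGNNs.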
