Learning Graph Neural Networks with Approximate Gradient Descent

12/07/2020
by Qunwei Li, et al.

This paper provides the first provably efficient algorithm for learning graph neural networks (GNNs) with one hidden layer for node information convolution. Two types of GNNs are investigated, depending on whether labels are attached to nodes or to graphs. A comprehensive framework for designing and analyzing the convergence of GNN training algorithms is developed. The proposed algorithm is applicable to a wide range of activation functions, including ReLU, Leaky ReLU, Sigmoid, Softplus, and Swish, and is shown to guarantee a linear convergence rate to the underlying true parameters of the GNN. For both types of GNNs, the sample complexity is characterized in terms of the number of nodes or the number of graphs, and the impact of the feature dimension and the GNN structure on the convergence rate is also characterized theoretically. Numerical experiments are further provided to validate our theoretical analysis.
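To make the setting concrete, the following is a minimal sketch of the learning problem the abstract describes: a one-hidden-layer GNN for node-level labels, where each node's features are first averaged over its neighbors (one round of node information convolution) and labels come from a planted teacher network. This illustrates the problem setup only; the paper's approximate gradient descent algorithm, step sizes, and analysis are not reproduced here, and all names (`A`, `X`, `W`, `c`, `lr`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 20, 5, 3                  # nodes, feature dimension, hidden units

# Random undirected graph with self-loops.
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 1.0)

# One round of convolution: average each node's features over its neighbors.
X = rng.normal(size=(n, d))
Z = (A @ X) / A.sum(axis=1, keepdims=True)

# Planted ground-truth parameters generate the labels (ReLU activation;
# output weights c are fixed and only the hidden-layer weights W are learned).
W_true = rng.normal(size=(k, d))
c = np.ones(k)
y = np.maximum(Z @ W_true.T, 0.0) @ c

# Plain gradient descent from a small random initialization.
W = 0.1 * rng.normal(size=(k, d))
lr = 0.05
mse0 = np.mean((np.maximum(Z @ W.T, 0.0) @ c - y) ** 2)
for _ in range(500):
    H = Z @ W.T
    err = np.maximum(H, 0.0) @ c - y                 # residual per node
    grad = ((H > 0) * err[:, None] * c).T @ Z / n    # gradient of MSE w.r.t. W
    W -= lr * grad
mse = np.mean((np.maximum(Z @ W.T, 0.0) @ c - y) ** 2)
```

The key structural point is that the graph enters only through the aggregated features `Z`; after aggregation, learning reduces to fitting a one-hidden-layer network on the convolved inputs, which is the regime the paper's convergence guarantees address.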


Related research

06/25/2020 · Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case
Although graph neural networks (GNNs) have made great progress recently ...

02/06/2023 · Joint Edge-Model Sparse Learning is Provably Efficient for Graph Neural Networks
Due to the significant computational challenge of training large-scale g...

06/24/2023 · Graph Neural Networks Provably Benefit from Structural Information: A Feature Learning Perspective
Graph neural networks (GNNs) have pioneered advancements in graph repres...

09/24/2020 · How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks
We study how neural networks trained by gradient descent extrapolate, i....

03/08/2023 · The Descriptive Complexity of Graph Neural Networks
We analyse the power of graph neural networks (GNNs) in terms of Boolean...

09/21/2021 · Graph Neural Networks for Graph Drawing
Graph Drawing techniques have been developed in the last few years with ...

06/24/2020 · Minimal Variance Sampling with Provable Guarantees for Fast Training of Graph Neural Networks
Sampling methods (e.g., node-wise, layer-wise, or subgraph) has become a...
