Nonlinear State-Space Generalizations of Graph Convolutional Neural Networks

10/27/2020
by Luana Ruiz, et al.

Graph convolutional neural networks (GCNNs) learn compositional representations from network data by nesting linear graph convolutions into nonlinearities. In this work, we approach GCNNs from a state-space perspective, revealing that the graph convolutional module is a minimalistic linear state-space model in which the state update matrix is the graph shift operator. We show that this state update can be problematic because it is nonparametric and, depending on the graph spectrum, may explode or vanish. The GCNN therefore has to trade its degrees of freedom between extracting features from the data and handling these instabilities. To improve this trade-off, we propose a novel family of nodal aggregation rules that aggregates node features within a layer in a nonlinear, parametric state-space fashion. We develop two architectures within this family, inspired by recursive models with and without nodal gating mechanisms. The proposed solutions generalize the GCNN and provide an additional handle to control the state update and learn from the data. Numerical results on source localization and authorship attribution show the superiority of the nonlinear state-space generalizations over the baseline GCNN.
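To make the state-space view concrete, here is a minimal NumPy sketch (not the authors' implementation; function names and the toy graph are illustrative). It computes a graph convolution as the recursion z_{k+1} = S z_k, where the state update matrix is the graph shift operator S, fixed by the graph rather than learned, while the filter taps h_k are the learnable coefficients. Repeated shifts by S are exactly where the explode/vanish behavior tied to the graph spectrum arises.

```python
import numpy as np

def graph_convolution(S, x, h):
    """Graph convolution y = sum_k h[k] * S^k x, computed as a
    linear state-space recursion z_{k+1} = S z_k, where the state
    update matrix is the (nonparametric) graph shift operator S.

    S : (N, N) graph shift operator (e.g., adjacency or Laplacian)
    x : (N, F) node features
    h : sequence of K+1 scalar filter taps (learnable in a GCNN)
    """
    z = x.copy()   # state z_0 = x
    y = h[0] * z   # contribution of the k = 0 tap
    for hk in h[1:]:
        z = S @ z  # state update: repeated shifts can explode or
                   # vanish depending on the spectrum of S
        y += hk * z
    return y

# Toy example: 4-node cycle graph, one feature per node
N = 4
S = np.roll(np.eye(N), 1, axis=1) + np.roll(np.eye(N), -1, axis=1)
x = np.random.randn(N, 1)
y = graph_convolution(S, x, h=[0.5, 0.3, 0.2])

# A GCNN layer nests this linear module into a nonlinearity:
layer_out = np.maximum(y, 0.0)  # ReLU
```

The proposed nodal aggregation rules replace this purely linear, nonparametric update with a nonlinear, parametric one, giving the network an explicit handle on the state dynamics instead of spending filter taps to compensate for them.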
