A Simple Proof of the Universality of Invariant/Equivariant Graph Neural Networks

10/09/2019
by Takanori Maehara, et al.

We present a simple proof of the universality of invariant and equivariant tensorized graph neural networks. Our approach introduces a restricted intermediate model, the Graph Homomorphism Model, from which the universality results follow, including a previously open case for higher-order output. We find that this technique not only yields simple proofs of the universality properties but also gives a natural explanation for the tensorization of the previously studied models. Finally, we give some remarks on the connection between our model and the continuous representation of graphs.
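The Graph Homomorphism Model named in the abstract builds on homomorphism counts, which are classical permutation-invariant graph features: relabeling the vertices of the input graph leaves hom(F, G) unchanged. As a minimal sketch of that invariance (our own illustration, not the authors' construction; `hom_count` and the example graphs are hypothetical names), a brute-force homomorphism count can be written as:

```python
from itertools import product

def hom_count(pattern_edges, n_pattern, graph_edges, n_graph):
    """Brute-force count of homomorphisms from a pattern graph F
    (n_pattern vertices, pattern_edges) to a graph G: maps
    V(F) -> V(G) sending every edge of F to an edge of G.
    hom(F, G) is invariant under relabeling G's vertices."""
    adj = set()
    for u, v in graph_edges:
        adj.add((u, v))
        adj.add((v, u))
    count = 0
    for phi in product(range(n_graph), repeat=n_pattern):
        if all((phi[u], phi[v]) in adj for u, v in pattern_edges):
            count += 1
    return count

# Count homomorphisms from a triangle into K4 and into a
# vertex-relabeled copy of K4: the counts agree (4*3*2 = 24),
# illustrating permutation invariance of the feature.
triangle = [(0, 1), (1, 2), (2, 0)]
k4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
perm = [2, 0, 3, 1]
k4_perm = [(perm[i], perm[j]) for i, j in k4]
print(hom_count(triangle, 3, k4, 4))       # 24
print(hom_count(triangle, 3, k4_perm, 4))  # 24
```

The exhaustive search runs in O(n_graph^n_pattern) time, so it is only a conceptual illustration of the invariant feature, not a practical component of a network.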


