
Evaluation Metrics for Graph Generative Models: Problems, Pitfalls, and Practical Solutions

by Leslie O'Bray, et al.

Graph generative models are a highly active branch of machine learning. Given the steady development of new models of ever-increasing complexity, it is necessary to provide a principled way to evaluate and compare them. In this paper, we enumerate the desirable criteria for comparison metrics, discuss the development of such metrics, and provide a comparison of their respective expressive power. We perform a systematic evaluation of the main metrics in use today, highlighting some of the challenges and pitfalls researchers can inadvertently run into. We then describe a collection of suitable metrics, give recommendations as to their practical suitability, and analyse their behaviour on synthetically generated perturbed graphs as well as on recently proposed graph generative models.
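To make the evaluation setting concrete, the sketch below shows one common metric family examined in this line of work: maximum mean discrepancy (MMD) computed over graph descriptors such as degree histograms. This is a minimal, self-contained illustration using NumPy only; the helper names, the Gaussian kernel, and the fixed histogram size are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

def degree_histogram(edges, n_nodes, max_degree=20):
    """Normalised degree histogram of a graph given as an edge list."""
    deg = np.zeros(n_nodes, dtype=int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # Clip so every graph maps to a descriptor of the same length.
    hist = np.bincount(np.clip(deg, 0, max_degree), minlength=max_degree + 1)
    return hist / hist.sum()

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel between two descriptor vectors."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def mmd_squared(set_a, set_b, kernel=gaussian_kernel):
    """Biased estimate of squared MMD between two sets of descriptors.

    Non-negative for any positive semi-definite kernel, and zero when
    the two descriptor sets coincide.
    """
    k_aa = np.mean([kernel(a, b) for a in set_a for b in set_a])
    k_bb = np.mean([kernel(a, b) for a in set_b for b in set_b])
    k_ab = np.mean([kernel(a, b) for a in set_a for b in set_b])
    return k_aa + k_bb - 2 * k_ab

# Example: a path graph vs. a triangle on three nodes.
reference = [degree_histogram([(0, 1), (1, 2)], 3)]
generated = [degree_histogram([(0, 1), (0, 2), (1, 2)], 3)]
score = mmd_squared(reference, generated)
```

Note that results of such a comparison hinge on seemingly minor choices (descriptor, kernel, bandwidth `sigma`), which is precisely the kind of pitfall the paper's systematic evaluation is concerned with.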

