Evaluating Generative Models Using Divergence Frontiers

by Josip Djolonga, et al.

Despite the tremendous progress in the estimation of generative models, the development of tools for diagnosing their failures and assessing their performance has advanced at a much slower pace. Recent work has investigated metrics that quantify which parts of the true distribution a model captures well and, conversely, which parts it fails to capture, akin to precision and recall in information retrieval. In this paper, we present a general evaluation framework for generative models that measures the trade-off between precision and recall using Rényi divergences. Our framework provides a novel perspective on existing techniques and extends them to more general domains. As a key advantage, it admits efficient algorithms that apply to continuous distributions without discretization. We further showcase the proposed techniques on a set of image synthesis models.
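As a concrete reference point for the precision-recall trade-off the abstract describes, the discretized precision-recall (PRD) curve of Sajjadi et al., which divergence frontiers generalize, can be sketched in a few lines. This is a minimal illustration over finite histograms, not the paper's Rényi-divergence method; the function name `prd_curve` and the parametrization of the trade-off weight λ by an angle are illustrative choices.

```python
import numpy as np

def prd_curve(p, q, num_angles=201):
    """Sketch of the discretized precision-recall (PRD) curve between
    a reference distribution p and a model distribution q over the same
    finite support (Sajjadi et al., 2018):

        precision: alpha(lam) = sum_i min(lam * p_i, q_i)
        recall:    beta(lam)  = sum_i min(p_i, q_i / lam) = alpha(lam) / lam

    Sweeping lam over (0, inf) traces the trade-off frontier.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    # Parametrize lam = tan(angle) so a uniform grid of angles covers (0, inf).
    angles = np.linspace(1e-6, np.pi / 2 - 1e-6, num_angles)
    lambdas = np.tan(angles)
    precision = np.array([np.minimum(lam * p, q).sum() for lam in lambdas])
    recall = precision / lambdas
    return precision, recall
```

For identical distributions the frontier reaches precision = recall = 1 (at λ = 1), while for disjoint supports both quantities collapse to 0, which matches the intuition that precision penalizes mass the model places outside the true distribution and recall penalizes true mass the model misses.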


