Bias and Generalization in Deep Generative Models: An Empirical Study

by Shengjia Zhao et al.
Stanford University

In high dimensional settings, density estimation algorithms rely crucially on their inductive bias. Despite recent empirical success, the inductive bias of deep generative models is not well understood. In this paper we propose a framework to systematically investigate bias and generalization in deep generative models of images. Inspired by experimental methods from cognitive psychology, we probe each learning algorithm with carefully designed training datasets to characterize when and how existing models generate novel attributes and their combinations. We identify similarities to human psychology and verify that these patterns are consistent across commonly used models and architectures.
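The probing protocol the abstract describes — train on a deliberately restricted set of attribute combinations, then check whether the model generates combinations it never saw — can be sketched in a few lines. This is a minimal illustration with toy attributes and hypothetical sample data, not the paper's actual experimental setup; the attribute names, the `novelty_rate` helper, and the sample list are all invented for the example.

```python
# Minimal sketch of the probing idea: hold out one attribute
# combination from training, then measure how often generated
# samples exhibit combinations absent from the training set.
from itertools import product

# Toy attribute space (stand-ins for image attributes).
COLORS = ["red", "green", "blue"]
SHAPES = ["circle", "square", "triangle"]
ALL_COMBOS = set(product(COLORS, SHAPES))

# Designed training set: deliberately omit one combination.
HELD_OUT = {("blue", "triangle")}
train_combos = ALL_COMBOS - HELD_OUT

def novelty_rate(samples, train_combos):
    """Fraction of samples whose attribute combination never
    appeared in training -- a proxy for combinatorial
    generalization."""
    novel = [s for s in samples if s not in train_combos]
    return len(novel) / len(samples)

# Pretend these came from a trained model's sampler plus an
# attribute classifier (both hypothetical in this sketch).
samples = [("red", "circle"), ("blue", "triangle"),
           ("green", "square"), ("blue", "triangle")]
print(novelty_rate(samples, train_combos))  # 2 of 4 are novel -> 0.5
```

In the paper's actual experiments the attributes are properties of images and the samples come from trained deep generative models, but the evaluation reduces to a count like this one.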


Related Research

Discovering Graph Generation Algorithms

We provide a novel approach to construct generative models for graphs. I...

Multilinear Latent Conditioning for Generating Unseen Attribute Combinations

Deep generative models rely on their inductive bias to facilitate genera...

On Memorization in Probabilistic Deep Generative Models

Recent advances in deep generative models have led to impressive results...

OSOA: One-Shot Online Adaptation of Deep Generative Models for Lossless Compression

Explicit deep generative models (DGMs), e.g., VAEs and Normalizing Flows...

Normal Similarity Network for Generative Modelling

Gaussian distributions are commonly used as a key building block in many...

Winning Lottery Tickets in Deep Generative Models

The lottery ticket hypothesis suggests that sparse, sub-networks of a gi...

Diversity vs. Recognizability: Human-like generalization in one-shot generative models

Robust generalization to new concepts has long remained a distinctive fe...

Code Repositories


The PyTorch implementation of the GLF
