Generalization in Generative Adversarial Networks: A Novel Perspective from Privacy Protection

by Bingzhe Wu, et al.
Ant Financial
Peking University

In this paper, we aim to understand the generalization properties of generative adversarial networks (GANs) from a new perspective of privacy protection. Theoretically, we prove that a differentially private learning algorithm used for training a GAN does not overfit beyond a certain degree, i.e., the generalization gap can be bounded. Moreover, some recent works, such as the Bayesian GAN, can be re-interpreted based on our theoretical insight from privacy protection. Quantitatively, to evaluate the information leakage of well-trained GAN models, we perform various membership attacks on these models. The results show that previous Lipschitz regularization techniques are effective not only in reducing the generalization gap but also in alleviating the information leakage of the training dataset.
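To make the connection between overfitting and leakage concrete, the following is a minimal, hypothetical sketch of a threshold-based membership inference attack of the kind the abstract describes. The scoring function, threshold `tau`, and the `overfit_bonus` parameter are illustrative assumptions, not the paper's method: a trained discriminator is simulated by a toy score that is inflated on training members when the model overfits.

```python
import numpy as np

# Hypothetical stand-in for a GAN discriminator D(x). An overfit model
# assigns systematically higher scores to its own training members;
# `overfit_bonus` simulates that memorization effect (illustrative only).
def discriminator_score(x, overfit_bonus=0.0):
    return 1.0 / (1.0 + np.exp(-(x + overfit_bonus)))

rng = np.random.default_rng(0)

# Simulated discriminator scores on training members vs. held-out points.
members = discriminator_score(rng.normal(0, 1, 1000), overfit_bonus=1.0)
non_members = discriminator_score(rng.normal(0, 1, 1000))

# Threshold attack: predict "member" whenever the score exceeds tau.
tau = 0.5
tpr = (members > tau).mean()       # true-positive rate on members
fpr = (non_members > tau).mean()   # false-positive rate on non-members
advantage = tpr - fpr              # membership advantage; ~0 means no leakage

print(f"attack advantage: {advantage:.2f}")
```

A bounded generalization gap (e.g., via differentially private training or Lipschitz regularization) shrinks the gap between member and non-member scores, driving the attack's advantage toward zero.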


