Convergence and Sample Complexity of SGD in GANs

12/01/2020
by   Vasilis Kontonis, et al.

We provide theoretical convergence guarantees for training Generative Adversarial Networks (GANs) via SGD. We consider learning a target distribution modeled by a 1-layer Generator network with a non-linear activation function ϕ(·) parametrized by a d × d weight matrix 𝐖_*, i.e., f_*(𝐱) = ϕ(𝐖_* 𝐱). Our main result is that training the Generator together with a Discriminator according to the Stochastic Gradient Descent-Ascent iteration proposed by Goodfellow et al. yields a Generator distribution that approaches the target distribution of f_*. Specifically, we can learn the target distribution within total-variation distance ϵ using Õ(d^2/ϵ^2) samples, which is (near-)information-theoretically optimal. Our results apply to a broad class of non-linear activation functions ϕ, including ReLUs, and are enabled by a connection with truncated statistics and an appropriate design of the Discriminator network. Our approach relies on a bilevel optimization framework to show that vanilla SGDA works.
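To make the setup concrete, below is a minimal illustrative sketch of a Stochastic Gradient Descent-Ascent (SGDA) loop for a 1-layer generator f(𝐱) = ϕ(𝐖𝐱) with ReLU activation. The simple quadratic discriminator, learning rates, and batch size here are assumptions for illustration only; the paper's actual Discriminator design (tied to truncated statistics) and analysis are more specific than this sketch.

```python
# Illustrative SGDA sketch for learning f_*(x) = ReLU(W_* x).
# Hypothetical hyperparameters and discriminator; not the paper's exact construction.
import torch

d = 8
torch.manual_seed(0)

# Ground-truth generator f_*(x) = ReLU(W_* x); its outputs play the role of "real" samples.
W_star = torch.randn(d, d) / d**0.5

def real_samples(n):
    x = torch.randn(n, d)
    return torch.relu(x @ W_star.T)

# Learned generator weights W and a simple quadratic discriminator D(y) = y^T A y + b^T y.
W = torch.randn(d, d, requires_grad=True)
A = torch.zeros(d, d, requires_grad=True)
b = torch.zeros(d, requires_grad=True)

eta_g, eta_d, batch = 1e-2, 1e-2, 128

for step in range(2000):
    x = torch.randn(batch, d)
    fake = torch.relu(x @ W.T)   # generator output phi(W x)
    real = real_samples(batch)

    def disc(y):
        # Per-sample discriminator value y^T A y + b^T y.
        return (y @ A * y).sum(dim=1) + y @ b

    # Discriminator maximizes this objective (separate real from fake);
    # generator minimizes it (make fake samples indistinguishable).
    loss = disc(real).mean() - disc(fake).mean()

    gW, gA, gb = torch.autograd.grad(loss, [W, A, b])
    with torch.no_grad():
        W -= eta_g * gW          # descent step on the generator
        A += eta_d * gA          # ascent step on the discriminator
        b += eta_d * gb
```

In this toy loop the generator and discriminator take simultaneous stochastic gradient steps on the same minimax objective, which is the "vanilla SGDA" structure the abstract refers to.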
