GANGs: Generative Adversarial Network Games

12/02/2017
by Frans A. Oliehoek, et al.

Generative Adversarial Networks (GANs) have become one of the most successful frameworks for unsupervised generative modeling. Because GANs are difficult to train, much research has focused on improving their training; however, very little of this work has directly exploited game-theoretic techniques. We introduce Generative Adversarial Network Games (GANGs), which explicitly model a finite zero-sum game between a generator (G) and a classifier (C) that use mixed strategies. The size of these games precludes exact solution methods, so we define resource-bounded best responses (RBBRs) and a resource-bounded Nash equilibrium (RB-NE) as a pair of mixed strategies such that neither G nor C can find a better RBBR. The RB-NE solution concept is richer than the notion of 'local Nash equilibrium' in that it captures not only failures to escape local optima of gradient descent, but applies to any approximate best-response computation, including methods with random restarts. To validate our approach, we solve GANGs with the Parallel Nash Memory algorithm, which provably monotonically converges to an RB-NE. We compare our results to standard GAN setups and demonstrate that our method deals well with typical GAN problems such as mode collapse, partial mode coverage, and forgetting.
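
To make the abstract's core idea more concrete, the following Python sketch illustrates a Parallel-Nash-Memory-style loop for a finite zero-sum GANG and the RB-NE stopping condition. It is a rough illustration under stated assumptions, not the authors' implementation: the helpers train_rbbr_G, train_rbbr_C, and evaluate are hypothetical placeholders for resource-bounded best-response training and payoff estimation, and the fictitious-play meta-solver stands in for a proper linear-program solution of the zero-sum meta-game.

```python
import numpy as np


def solve_zero_sum_game(payoff):
    """Approximately solve the finite zero-sum meta-game over the current strategy sets.

    payoff[i, j] is the generator's payoff when pure strategy i (a trained G)
    plays pure strategy j (a trained C). A linear program would be the standard
    choice; simple fictitious play is used here as a lightweight stand-in.
    """
    n, m = payoff.shape
    g_counts, c_counts = np.ones(n), np.ones(m)
    for _ in range(5000):
        # Generator best-responds to C's empirical mixture; classifier to G's.
        g_counts[np.argmax(payoff @ (c_counts / c_counts.sum()))] += 1
        c_counts[np.argmin((g_counts / g_counts.sum()) @ payoff)] += 1
    return g_counts / g_counts.sum(), c_counts / c_counts.sum()


def parallel_nash_memory(train_rbbr_G, train_rbbr_C, evaluate, iterations=20):
    """PNM-style loop: grow the pure-strategy sets with resource-bounded best
    responses (RBBRs) against the current mixtures, then re-solve the meta-game.
    Stops at an RB-NE: neither player's RBBR improves on its mixture payoff."""
    G_set, C_set = [train_rbbr_G(None)], [train_rbbr_C(None)]
    payoff = np.array([[evaluate(G_set[0], C_set[0])]])
    for _ in range(iterations):
        sigma_G, sigma_C = solve_zero_sum_game(payoff)
        mix_value = sigma_G @ payoff @ sigma_C
        # Resource-bounded best responses against the opponent's mixed strategy.
        new_G = train_rbbr_G((C_set, sigma_C))
        new_C = train_rbbr_C((G_set, sigma_G))
        gain_G = np.array([evaluate(new_G, C) for C in C_set]) @ sigma_C - mix_value
        gain_C = mix_value - sigma_G @ np.array([evaluate(G, new_C) for G in G_set])
        if gain_G <= 0 and gain_C <= 0:
            break  # no better RBBR found for either player: RB-NE under the budget
        # Expand the game and rebuild the payoff matrix with the new strategies.
        G_set.append(new_G)
        C_set.append(new_C)
        payoff = np.array([[evaluate(G, C) for C in C_set] for G in G_set])
    return G_set, C_set, solve_zero_sum_game(payoff)
```

In this sketch, the resource bound lives entirely inside the hypothetical RBBR trainers (for example, a fixed number of gradient-descent steps, possibly with random restarts), which is what makes the returned mixtures an RB-NE rather than an exact Nash equilibrium.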


