Witnessing Adversarial Training in Reproducing Kernel Hilbert Spaces

01/26/2019
by Arash Mehrjou, et al.

Modern implicit generative models such as generative adversarial networks (GANs) are known to suffer from instability and a lack of interpretability: it is difficult to diagnose which aspects of the target distribution the generative model misses. In this work, we propose a theoretically grounded remedy that augments the GAN loss with a kernel-based regularization term magnifying the local discrepancy between the distributions of generated and real samples. The proposed method relies on so-called witness points in the data space, which are trained jointly with the generator and provide an interpretable indication of where the two distributions differ locally during training. In addition, the algorithm is scaled to higher dimensions by learning the witness locations in the latent space of an autoencoder. We theoretically investigate the dynamics of the training procedure, prove that a desirable equilibrium point exists, and show that the dynamical system is locally stable around this equilibrium. Finally, we demonstrate different aspects of the proposed algorithm through numerical simulations of analytical solutions and empirical results on low- and high-dimensional datasets.
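The witness points described above are closely related to the empirical witness function of the maximum mean discrepancy (MMD): locations where the function takes large absolute values mark regions where the real and generated sample distributions disagree. The following is a minimal sketch of that idea, assuming a Gaussian RBF kernel; all names here (`witness`, `rbf_kernel`, the bandwidth `sigma`) are illustrative and not taken from the paper's code.

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    """Gaussian RBF kernel matrix between the rows of x and the rows of y."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def witness(points, real, fake, sigma=1.0):
    """Empirical MMD witness function evaluated at `points`:
        f(w) = mean_i k(w, real_i) - mean_j k(w, fake_j).
    Large |f(w)| indicates a local discrepancy between the two samples."""
    return (rbf_kernel(points, real, sigma).mean(axis=1)
            - rbf_kernel(points, fake, sigma).mean(axis=1))

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 1))  # "data" samples
fake = rng.normal(3.0, 1.0, size=(500, 1))  # mismatched "generator" samples

# Scan a 1-D grid and pick the most discriminative location as a witness point.
grid = np.linspace(-5.0, 8.0, 261)[:, None]
f = witness(grid, real, fake)
w_star = grid[np.abs(f).argmax(), 0]
```

In the paper's setting the witness locations are not found by a grid scan but are trained jointly with the generator (and, in high dimensions, parameterized in an autoencoder's latent space); the sketch only shows what the witness function itself measures.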


Related research

- Kernel-Guided Training of Implicit Generative Models with Stability Guarantees (10/29/2019)
- Local Convergence of Gradient Descent-Ascent for Training Generative Adversarial Networks (05/14/2023)
- GENs: Generative Encoding Networks (10/28/2020)
- Inferential Wasserstein Generative Adversarial Networks (09/13/2021)
- Learning High-Dimensional Distributions with Latent Neural Fokker-Planck Kernels (05/10/2021)
- Neural Stein critics with staged L^2-regularization (07/07/2022)
- Instability and Local Minima in GAN Training with Kernel Discriminators (08/21/2022)
