EAGAN: Efficient Two-stage Evolutionary Architecture Search for GANs

11/30/2021
by Guohao Ying, et al.

Generative Adversarial Networks (GANs) have proven hugely successful in image generation tasks, but GAN training is notoriously unstable. Many works have improved training stability by manually modifying the GAN architecture, which requires human expertise and extensive trial-and-error. Thus, neural architecture search (NAS), which aims to automate model design, has been applied to search for GANs on the task of unconditional image generation. Early NAS-GAN works searched only the generator to reduce the difficulty. Some recent works have attempted to search both the generator (G) and the discriminator (D) to improve GAN performance, but they still suffer from unstable GAN training during the search. To alleviate this instability, we propose an efficient two-stage evolutionary algorithm (EA) based NAS framework to discover GANs, dubbed EAGAN. Specifically, we decouple the search of G and D into two stages and propose a weight-resetting strategy to improve the stability of GAN training. In addition, we perform evolution operations to produce Pareto-front architectures under multiple objectives, yielding a superior combination of G and D. By leveraging the weight-sharing strategy and low-fidelity evaluation, EAGAN significantly shortens the search time. EAGAN achieves highly competitive results on CIFAR-10 (IS=8.81±0.10, FID=9.91) and surpasses previous NAS-searched GANs on STL-10 (IS=10.44±0.087, FID=22.18).
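
To make the two-stage idea concrete, below is a minimal Python sketch of how such a search loop could look: in each generation, candidate architectures are mutated, scored on two objectives (IS and FID), and filtered down to a Pareto front, first for generators and then for discriminators. All names (random_arch, mutate, evaluate, pareto_front, evolve) and the search-space sizes are illustrative assumptions, not the paper's implementation; the actual EAGAN scores candidates by inheriting weights from a shared supernet with low-fidelity IS/FID evaluation and applies its weight-resetting strategy between the two stages.

```python
# Hypothetical sketch of a two-stage, multi-objective evolutionary NAS loop.
import random
from typing import List, Tuple

Arch = Tuple[int, ...]          # an architecture encoded as a tuple of operation indices
NUM_CHOICES, NUM_NODES = 4, 5   # illustrative search-space sizes, not the paper's


def random_arch() -> Arch:
    return tuple(random.randrange(NUM_CHOICES) for _ in range(NUM_NODES))


def mutate(arch: Arch) -> Arch:
    # Replace the operation at one randomly chosen position.
    pos = random.randrange(len(arch))
    return arch[:pos] + (random.randrange(NUM_CHOICES),) + arch[pos + 1:]


def evaluate(arch: Arch) -> Tuple[float, float]:
    """Placeholder low-fidelity evaluation returning (IS, -FID), higher is better.

    In EAGAN this step would inherit weights from the shared supernet and score
    a small batch of generated images; here we fake deterministic scores.
    """
    rng = random.Random(hash(arch))
    return rng.uniform(5.0, 9.0), -rng.uniform(10.0, 40.0)


def pareto_front(pop: List[Arch]) -> List[Arch]:
    """Keep candidates that no other candidate dominates on both objectives."""
    scores = {a: evaluate(a) for a in pop}
    front = []
    for a in pop:
        dominated = any(
            all(sb >= sa for sa, sb in zip(scores[a], scores[b]))
            and any(sb > sa for sa, sb in zip(scores[a], scores[b]))
            for b in pop
            if b != a
        )
        if not dominated:
            front.append(a)
    return front


def evolve(generations: int = 5, pop_size: int = 8) -> List[Arch]:
    # One evolutionary stage: keep the Pareto front, refill with mutated children.
    pop = [random_arch() for _ in range(pop_size)]
    for _ in range(generations):
        parents = pareto_front(pop)
        children = [mutate(random.choice(parents)) for _ in range(pop_size - len(parents))]
        pop = parents + children
    return pareto_front(pop)


# Stage 1: evolve generator architectures (discriminator held fixed).
generator_front = evolve()
# Stage 2: evolve discriminator architectures against the best generator,
# resetting inherited weights between stages (EAGAN's weight-resetting strategy).
discriminator_front = evolve()
print(generator_front[0], discriminator_front[0])
```

The real framework trains far more expensive models at each step, but the control flow stays this simple: selection is driven by a multi-objective Pareto front rather than a single scalar fitness, which is what lets the search return a set of superior G/D combinations instead of one compromise architecture.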


