Reversible GANs for Memory-efficient Image-to-Image Translation

02/07/2019
by Tycho F. A. van der Ouderaa, et al., University of Amsterdam

The Pix2pix and CycleGAN losses have vastly improved the qualitative and quantitative visual quality of results in image-to-image translation tasks. We extend this framework by exploring approximately invertible architectures that are well suited to these losses. Because these architectures are approximately invertible by design, they partially satisfy cycle-consistency before training even begins. Furthermore, since invertible architectures have constant memory complexity in depth, these models can be built arbitrarily deep. We demonstrate superior quantitative results on the Cityscapes and Maps datasets at a near-constant memory budget.
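To make the invertibility and memory claims concrete, the sketch below shows an additive-coupling reversible block in PyTorch. This is a minimal illustration of the general technique behind reversible networks, not the authors' exact architecture; the names RevBlock and conv3x3 are assumptions introduced here. Because the block has an exact analytic inverse, intermediate activations can in principle be recomputed during the backward pass rather than stored, which is what keeps memory roughly constant as depth grows (this sketch only demonstrates the invertibility itself, not the recomputation machinery).

```python
# Minimal sketch (assumed names, not the paper's code) of an additive-coupling
# reversible block. Invertibility by construction is the property that lets
# reversible architectures trade recomputation for activation storage.
import torch
import torch.nn as nn


def conv3x3(channels: int) -> nn.Sequential:
    """Small residual sub-network used inside the coupling (illustrative)."""
    return nn.Sequential(
        nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(channels, channels, kernel_size=3, padding=1),
    )


class RevBlock(nn.Module):
    """Additive coupling: split channels into (x1, x2) and mix them so that
    the mapping can be inverted exactly from its outputs."""

    def __init__(self, channels: int):
        super().__init__()
        assert channels % 2 == 0, "channels must split evenly"
        self.f = conv3x3(channels // 2)
        self.g = conv3x3(channels // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = torch.chunk(x, 2, dim=1)
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return torch.cat([y1, y2], dim=1)

    def inverse(self, y: torch.Tensor) -> torch.Tensor:
        # Exact reconstruction of the input from the output, so activations
        # could be recomputed on the backward pass instead of being stored.
        y1, y2 = torch.chunk(y, 2, dim=1)
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return torch.cat([x1, x2], dim=1)


if __name__ == "__main__":
    block = RevBlock(channels=8)
    x = torch.randn(1, 8, 32, 32)
    with torch.no_grad():
        y = block(x)
        x_rec = block.inverse(y)
    print(torch.allclose(x, x_rec, atol=1e-5))  # True: the block inverts exactly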


Related research

05/08/2019 · Generative Reversible Data Hiding by Image to Image Translation via GANs
The traditional reversible data hiding technique is based on cover image...

07/06/2020 · MCMI: Multi-Cycle Image Translation with Mutual Information Constraints
We present a mutual information-based framework for unsupervised image-t...

04/14/2019 · Biphasic Learning of GANs for High-Resolution Image-to-Image Translation
Despite that the performance of image-to-image translation has been sign...

05/28/2021 · MixerGAN: An MLP-Based Architecture for Unpaired Image-to-Image Translation
While attention-based transformer networks achieve unparalleled success ...

01/24/2020 · Kernel of CycleGAN as a Principle homogeneous space
Unpaired image-to-image translation has attracted significant interest d...

07/19/2019 · Predicting Visual Memory Schemas with Variational Autoencoders
Visual memory schema (VMS) maps show which regions of an image cause tha...

04/02/2021 · The Spatially-Correlative Loss for Various Image Translation Tasks
We propose a novel spatially-correlative loss that is simple, efficient ...
