Reversible GANs for Memory-efficient Image-to-Image Translation

by Tycho F. A. van der Ouderaa, et al.
University of Amsterdam

The Pix2pix and CycleGAN losses have vastly improved the qualitative and quantitative visual quality of results in image-to-image translation tasks. We extend this framework by exploring approximately invertible architectures which are well suited to these losses. These architectures are approximately invertible by design and thus partially satisfy cycle-consistency before training even begins. Furthermore, since invertible architectures have constant memory complexity in depth, these models can be built arbitrarily deep. We are able to demonstrate superior quantitative output on the Cityscapes and Maps datasets at near constant memory budget.
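The constant-memory property comes from invertibility: if a layer's input can be recomputed exactly from its output, activations need not be stored for the backward pass. A minimal sketch of this idea, assuming an additive-coupling reversible block in the RevNet style (the functions `f` and `g` here are illustrative placeholders, not the paper's actual sub-networks):

```python
import numpy as np

# Additive-coupling reversible block (RevNet-style sketch).
# The input is split into two halves (x1, x2). Because the inverse
# recovers (x1, x2) exactly from (y1, y2), intermediate activations
# can be recomputed on the fly instead of stored, giving memory cost
# that is constant in network depth.

def f(x):
    # Placeholder residual function; any deterministic map works.
    return np.tanh(x)

def g(x):
    # Second placeholder residual function.
    return np.tanh(x)

def forward(x1, x2):
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2

def inverse(y1, y2):
    # Exact algebraic inverse of forward(), up to floating-point error.
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2
```

Chaining many such blocks keeps activation memory fixed regardless of depth, since each block's input is reconstructed from its output during backpropagation.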




Generative Reversible Data Hiding by Image to Image Translation via GANs

The traditional reversible data hiding technique is based on cover image...

MCMI: Multi-Cycle Image Translation with Mutual Information Constraints

We present a mutual information-based framework for unsupervised image-t...

Biphasic Learning of GANs for High-Resolution Image-to-Image Translation

Despite that the performance of image-to-image translation has been sign...

MixerGAN: An MLP-Based Architecture for Unpaired Image-to-Image Translation

While attention-based transformer networks achieve unparalleled success ...

Kernel of CycleGAN as a Principal Homogeneous Space

Unpaired image-to-image translation has attracted significant interest d...

Predicting Visual Memory Schemas with Variational Autoencoders

Visual memory schema (VMS) maps show which regions of an image cause tha...

The Spatially-Correlative Loss for Various Image Translation Tasks

We propose a novel spatially-correlative loss that is simple, efficient ...
