How to train your conditional GAN: An approach using geometrically structured latent manifolds
Conditional generative modeling typically requires capturing one-to-many mappings between inputs and outputs. However, vanilla conditional GANs (cGANs) tend to ignore variations in the latent seed, which results in mode collapse. As a remedy, recent works have moved towards comparatively expensive models for generating diverse outputs in a conditional setting. In this paper, we argue that the limited diversity of vanilla cGANs is not due to a lack of capacity, but a result of non-optimal training schemes. We tackle this problem from a geometrical perspective and propose a novel training mechanism that increases both the diversity and the visual quality of the vanilla cGAN. The proposed solution demands no architectural modifications and paves the way for more efficient architectures that target conditional generation in multi-modal spaces. We validate the efficacy of our model on a diverse set of tasks and show that the proposed solution is generic and effective across multiple datasets.
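To make the failure mode concrete: a collapsed conditional generator produces the same output for every latent seed, so output diversity per condition vanishes. The sketch below is not the paper's method; it illustrates, with toy numpy generator stubs, a generic mode-seeking diversity ratio (in the spirit of training-time regularizers such as MSGAN) that distinguishes a generator that ignores its latent seed from one that uses it. All function names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def collapsed_gen(z, c):
    # Ignores the latent seed z entirely -> mode collapse:
    # every z maps to the same output for a given condition c.
    return np.tanh(c)

def diverse_gen(z, c):
    # Uses both the latent seed and the condition.
    return np.tanh(c + 0.5 * z)

def mode_seeking_ratio(gen, z1, z2, c):
    # Ratio of output distance to latent distance for two seeds.
    # Maximizing a term like this during training penalizes
    # generators whose outputs do not vary with z.
    num = np.linalg.norm(gen(z1, c) - gen(z2, c))
    den = np.linalg.norm(z1 - z2)
    return num / den

z1, z2 = rng.normal(size=8), rng.normal(size=8)
c = rng.normal(size=8)

print(mode_seeking_ratio(collapsed_gen, z1, z2, c))  # 0.0: collapsed
print(mode_seeking_ratio(diverse_gen, z1, z2, c))    # positive: diverse
```

For the collapsed generator the ratio is exactly zero regardless of the seeds, which is why a regularizer of this form can restore diversity without any architectural change.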