ExFaceGAN: Exploring Identity Directions in GAN's Learned Latent Space for Synthetic Identity Generation
Deep generative models have recently demonstrated impressive results in generating realistic face images of random synthetic identities. To generate multiple samples of a given synthetic identity, several previous works proposed to disentangle the latent space of GANs by incorporating additional supervision or regularization, enabling the manipulation of certain attributes, e.g., identity, hairstyle, pose, or expression. Most of these works require designing special loss functions and training dedicated network architectures. Others proposed to disentangle specific factors in the latent spaces of unconditional pretrained GANs to control their output, which also requires supervision by attribute classifiers. Moreover, these attributes are entangled in the GAN's latent space, making it difficult to manipulate them without affecting the identity information. In this work, we propose ExFaceGAN, a framework to disentangle identity information in the latent spaces of state-of-the-art pretrained GANs, enabling the generation of multiple samples of any synthetic identity. The variations in our generated images are not limited to specific attributes, as ExFaceGAN explicitly aims at disentangling identity information, while other visual attributes are randomly drawn from the learned GAN latent space. As an example of the practical benefits of ExFaceGAN, we empirically demonstrate that data generated by ExFaceGAN can be successfully used to train face recognition models.
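The following is a minimal illustrative sketch, not the authors' released code, of the general idea of separating same-identity from different-identity latent codes with a learned boundary and then sampling on the identity-preserving side. It assumes a linear boundary fitted with an SVM; the GAN generator and the face recognition model that would produce the same/different-identity labels are replaced here by placeholders (`w_ref` and a distance-based pseudo-labeling), so all names and thresholds are assumptions for illustration only.

```python
# Hedged sketch: learning an identity boundary in a GAN latent space and
# sampling latents on its identity-preserving side. Placeholder data stands
# in for real GAN latents and face-recognition labels.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
latent_dim = 512

# Placeholder: a reference latent code representing one synthetic identity.
w_ref = rng.standard_normal(latent_dim)

# Step 1: perturb the reference latent to obtain candidate codes.
candidates = w_ref + 0.5 * rng.standard_normal((2000, latent_dim))

# Step 2 (placeholder labels): in a real pipeline, each candidate would be
# decoded by the GAN and compared to the reference image with a face
# recognition model; here we fake same/different-identity labels by
# thresholding the latent distance to w_ref.
dists = np.linalg.norm(candidates - w_ref, axis=1)
labels = (dists < np.median(dists)).astype(int)  # 1 = same identity

# Step 3: fit a linear boundary separating same- from different-identity codes.
svm = LinearSVC(C=1.0, max_iter=10_000).fit(candidates, labels)

# Step 4: sample fresh latents and keep those on the same-identity side of the
# boundary; decoding them with the GAN would yield varied images (pose,
# expression, etc.) of the one identity.
samples = w_ref + 0.5 * rng.standard_normal((100, latent_dim))
same_id = samples[svm.decision_function(samples) > 0]
print(f"kept {len(same_id)} latents on the identity side of the boundary")
```

In this reading, identity is controlled by which side of the boundary a latent falls on, while all other visual attributes remain free to vary with the random sampling, which matches the abstract's claim that variations are not limited to a fixed set of attributes.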