OrthoGAN: Multifaceted Semantics for Disentangled Face Editing
This paper describes a new technique for finding disentangled semantic directions in the latent space of StyleGAN. OrthoGAN identifies meaningful orthogonal subspaces that allow editing of one human face attribute while minimizing undesired changes to other attributes. Our model can edit a single attribute in multiple directions, yielding a range of possible generated images. We compare our scheme with three state-of-the-art models and show that our method outperforms them in terms of face editing and disentanglement capabilities. Additionally, we propose quantitative measures for evaluating attribute separation and disentanglement, and demonstrate the superiority of our model with respect to those measures.
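The core idea of editing along orthogonal directions can be illustrated with a small sketch. This is not the paper's method; the directions and latent code below are random placeholders standing in for learned attribute directions in a StyleGAN latent space, and the orthogonalization (via QR decomposition) is a generic stand-in for whatever procedure the paper uses to obtain orthogonal subspaces.

```python
import numpy as np

# Hypothetical illustration: given raw (possibly entangled) attribute
# directions in a 512-dimensional latent space, orthogonalize them so
# that moving along one direction produces zero displacement along the
# others. Directions and latent code are random placeholders.

rng = np.random.default_rng(0)
latent_dim = 512

# Three raw attribute directions, stacked as rows.
raw_directions = rng.standard_normal((3, latent_dim))

# QR decomposition gives an orthonormal basis for the same subspace.
Q, _ = np.linalg.qr(raw_directions.T)  # Q: (latent_dim, 3), orthonormal columns
ortho_directions = Q.T                 # (3, latent_dim)

# Pairwise inner products form the identity: the directions are orthonormal.
gram = ortho_directions @ ortho_directions.T
assert np.allclose(gram, np.eye(3), atol=1e-8)

# Edit a latent code along attribute 0 with strength alpha.
w = rng.standard_normal(latent_dim)
alpha = 2.0
w_edited = w + alpha * ortho_directions[0]

# The edit has (numerically) zero projection onto the other directions,
# so the other attributes are left unchanged by construction.
delta = w_edited - w
print(abs(delta @ ortho_directions[1]))  # ~0
print(abs(delta @ ortho_directions[2]))  # ~0
```

Because the directions are orthonormal, an edit of strength `alpha` along one direction contributes nothing to the others, which is the geometric intuition behind using orthogonal subspaces for disentanglement.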