Training face verification models from generated face identity data

by Dennis Conway, et al.

Machine learning tools are becoming increasingly powerful and widely used. Unfortunately, membership attacks, which seek to uncover information about the data sets used to train machine learning models, have the potential to limit data sharing. In this paper we consider an approach to increasing the privacy protection of data sets, as applied to face recognition. Using an auxiliary face recognition model, we build on the StyleGAN generative adversarial network and feed it with latent codes combining two distinct sub-codes: one encoding visual identity factors, the other non-identity factors. By independently varying these vectors during image generation, we create a synthetic data set of fictitious face identities, which we use to train a face recognition model. When tested with a simple membership attack, our model provides good privacy protection; however, its performance degrades in comparison to the state of the art in face verification. We find that the addition of a small amount of private data greatly improves the performance of our model, which highlights the limitations of using synthetic data to train machine learning models.
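The core idea in the abstract can be sketched in code: each fictitious identity is assigned a fixed identity sub-code, while the non-identity sub-code is resampled for every image of that identity. The sketch below is a minimal illustration of that sampling scheme only; the sub-code dimensions, the generator, and the helper names are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sub-code sizes; the paper does not specify them here.
ID_DIM, STYLE_DIM = 512, 512

def sample_identity_codes(n):
    """Sample n identity sub-codes, one per fictitious identity."""
    return rng.standard_normal((n, ID_DIM))

def sample_style_codes(n):
    """Sample n non-identity (pose, lighting, etc.) sub-codes."""
    return rng.standard_normal((n, STYLE_DIM))

def make_latent_dataset(n_identities, images_per_identity):
    """Build labelled latent codes: the identity half is held fixed
    across all images of one identity, while the non-identity half
    varies per image. Each full code would be fed to the generator."""
    latents, labels = [], []
    for label, z_id in enumerate(sample_identity_codes(n_identities)):
        for z_style in sample_style_codes(images_per_identity):
            latents.append(np.concatenate([z_id, z_style]))
            labels.append(label)
    return np.stack(latents), np.array(labels)

latents, labels = make_latent_dataset(n_identities=4, images_per_identity=3)
```

Images that share a label share the first half of their latent code (the identity factors) and differ only in the second half, which is what lets the synthetic data set carry consistent per-identity labels for face recognition training.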




