Towards Explaining Demographic Bias through the Eyes of Face Recognition Models

08/29/2022
by   Biying Fu, et al.

Biases inherent in both data and algorithms make the fairness of widespread machine learning (ML)-based decision-making systems less than optimal. To improve the trustworthiness of such ML decision systems, it is crucial to be aware of the inherent biases in these solutions and to make them more transparent to the public and to developers. In this work, we aim at providing a set of explainability tools that analyze the differences in face recognition (FR) models' behavior when processing different demographic groups. We do that by leveraging higher-order statistical information based on activation maps to build explainability tools that link the FR models' behavior differences to certain facial regions. The experimental results on two datasets and two face recognition models point out certain areas of the face where the FR models react differently for certain demographic groups compared to reference groups. Interestingly, the outcome of these analyses aligns well with the results of studies that analyzed the anthropometric differences and the human judgment differences on the faces of different demographic groups. This is thus the first study that specifically tries to explain the biased behavior of FR models on different demographic groups and link it directly to spatial facial features. The code is publicly available here.
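The abstract describes aggregating higher-order statistics of activation maps per demographic group and comparing each group against a reference group. A minimal sketch of that idea is shown below; the function name, the NumPy-only pipeline, and the choice of mean/variance/skewness as the statistics are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def groupwise_activation_stats(act_maps, labels, reference_group):
    """Per-group spatial statistics of activation maps, plus differences
    to a reference group.

    act_maps: ndarray of shape (N, H, W), one activation map per sample
              (e.g. from a late convolutional layer of an FR model).
    labels:   length-N sequence of demographic group labels.
    reference_group: label of the group all others are compared against.
    """
    labels = np.asarray(labels)
    stats = {}
    for g in np.unique(labels):
        maps = act_maps[labels == g]            # (n_g, H, W)
        mu = maps.mean(axis=0)                  # first moment per location
        var = maps.var(axis=0)                  # second central moment
        # third standardized moment (skewness) per spatial location
        skew = ((maps - mu) ** 3).mean(axis=0) / np.maximum(var, 1e-8) ** 1.5
        stats[g] = {"mean": mu, "var": var, "skew": skew}
    ref = stats[reference_group]
    # Per-location difference maps highlight facial regions where a group's
    # activation statistics deviate from the reference group.
    diffs = {
        g: {k: s[k] - ref[k] for k in s}
        for g, s in stats.items()
        if g != reference_group
    }
    return stats, diffs
```

The resulting difference maps can be upsampled to input resolution and overlaid on aligned face images to localize the regions where model behavior diverges across groups.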


Related research

06/13/2020
Mitigating Face Recognition Bias via Group Adaptive Classifier
Face recognition is known to exhibit bias - subjects in certain demograp...

04/26/2023
Are Explainability Tools Gender Biased? A Case Study on Face Presentation Attack Detection
Face recognition (FR) systems continue to spread in our daily lives with...

06/01/2023
Being Right for Whose Right Reasons?
Explainability methods are used to benchmark the extent to which model p...

09/30/2022
The More Secure, The Less Equally Usable: Gender and Ethnicity (Un)fairness of Deep Face Recognition along Security Thresholds
Face biometrics are playing a key role in making modern smart city appli...

08/22/2023
(Un)fair Exposure in Deep Face Rankings at a Distance
Law enforcement regularly faces the challenge of ranking suspects from t...

02/10/2020
Post-Comparison Mitigation of Demographic Bias in Face Recognition Using Fair Score Normalization
Current face recognition systems achieved high progress on several bench...
