A Survey on Intersectional Fairness in Machine Learning: Notions, Mitigation, and Challenges

05/11/2023
by   Usman Gohar, et al.

The widespread adoption of machine learning systems, especially in decision-critical applications such as criminal sentencing and bank lending, has raised concerns about their fairness implications. Algorithms and metrics have been developed to measure and mitigate such discrimination. More recently, research has identified a more challenging form of bias called intersectional bias, which considers multiple sensitive attributes, such as race and gender, together. In this survey, we review the state of the art in intersectional fairness. We present a taxonomy of intersectional notions of fairness and mitigation methods. Finally, we identify the key challenges and provide researchers with guidelines for future directions.
