Making ML models fairer through explanations: the case of LimeOut

11/01/2020
by Guilherme Alves, et al.

Algorithmic decisions are now being made on a daily basis, and they are often based on Machine Learning (ML) processes that may be complex and biased. This raises several concerns, given the critical impact that biased decisions may have on individuals or on society as a whole. Not only do unfair outcomes affect human rights, they also undermine public trust in ML and AI. In this paper we address fairness issues of ML models based on decision outcomes, and we show how the simple idea of "feature dropout" followed by an "ensemble approach" can improve model fairness. To illustrate, we revisit the case of "LimeOut", which was proposed to tackle "process fairness" — a measure of a model's reliance on sensitive or discriminatory features. Given a classifier, a dataset, and a set of sensitive features, LimeOut first assesses whether the classifier is fair by checking its reliance on sensitive features using "LIME explanations". If deemed unfair, LimeOut then applies feature dropout to obtain a pool of classifiers. These are then combined into an ensemble classifier that was empirically shown to be less dependent on sensitive features without compromising accuracy. We present experiments on multiple datasets and several state-of-the-art classifiers, which show that LimeOut's classifiers improve (or at least maintain) not only process fairness but also other fairness metrics such as individual and group fairness, equal opportunity, and demographic parity.
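The feature-dropout and ensemble steps described above can be sketched in a few lines. The sketch below is illustrative, not the authors' implementation: it omits the LIME-based fairness assessment and simply assumes the sensitive features have already been flagged; the data, feature indices, and function names are all made up for the example. Following the paper's idea, it trains one classifier per dropout scheme (each sensitive feature removed individually, then all of them removed at once) and averages their predicted probabilities.

```python
# Hedged sketch of LimeOut's core idea: feature dropout + ensembling.
# The LIME assessment step is omitted; `sensitive` is assumed given.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: 5 features; indices 3 and 4 play the "sensitive" role.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
sensitive = [3, 4]

def dropout_pool(X, y, sensitive):
    """Train one classifier per dropout scheme: drop each sensitive
    feature individually, then drop all of them together."""
    schemes = [[i] for i in sensitive] + [list(sensitive)]
    pool = []
    for drop in schemes:
        keep = [j for j in range(X.shape[1]) if j not in drop]
        clf = LogisticRegression().fit(X[:, keep], y)
        pool.append((keep, clf))
    return pool

def ensemble_predict_proba(pool, X):
    """Average the pool members' predicted class probabilities."""
    probs = [clf.predict_proba(X[:, keep]) for keep, clf in pool]
    return np.mean(probs, axis=0)

pool = dropout_pool(X, y, sensitive)
proba = ensemble_predict_proba(pool, X)
preds = proba.argmax(axis=1)
print("ensemble accuracy:", (preds == y).mean())
```

Because the label here depends only on non-sensitive features, dropping the sensitive columns costs little accuracy — which mirrors the paper's empirical finding that the ensemble reduces reliance on sensitive features without compromising performance.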


Related research:

- LimeOut: An Ensemble Approach To Improve Process Fairness (06/17/2020)
- Reducing Unintended Bias of ML Models on Tabular and Textual Data (08/05/2021)
- Recovering from Biased Data: Can Fairness Constraints Improve Accuracy? (12/02/2019)
- Fairness of Machine Learning Algorithms in Demography (02/02/2022)
- Algorithmic Fairness Verification with Graphical Models (09/20/2021)
- Monitoring Algorithmic Fairness (05/25/2023)
- Explainable Global Fairness Verification of Tree-Based Classifiers (09/27/2022)
