MultiFair: Multi-Group Fairness in Machine Learning

by Jian Kang et al.

Algorithmic fairness is becoming increasingly important in data mining and machine learning, and one of its most fundamental notions is group fairness. The vast majority of existing work on group fairness, with a few exceptions, focuses on debiasing with respect to a single sensitive attribute, despite the fact that the co-existence of multiple sensitive attributes (e.g., gender, race, marital status) is commonplace in the real world. As such, methods are needed that can ensure a fair learning outcome with respect to all sensitive attributes of concern simultaneously. In this paper, we study multi-group fairness in machine learning (MultiFair), where statistical parity, a representative group fairness measure, is guaranteed among demographic groups formed by multiple sensitive attributes of interest. We formulate it as a mutual information minimization problem and propose a generic end-to-end algorithmic framework to solve it. The key idea is to leverage a variational representation of mutual information, which considers the variational distribution between learning outcomes and sensitive attributes, as well as the density ratio between the variational and the original distributions. Our proposed framework generalizes to many different settings, including other statistical notions of fairness, and can handle any learning task equipped with a gradient-based optimizer. Empirical evaluations on the fair classification task with three real-world datasets demonstrate that our framework effectively debiases the classification results with minimal impact on classification accuracy.
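The target the abstract describes, statistical parity across demographic groups formed by several sensitive attributes jointly, can be made concrete with a small metric. The sketch below (an illustration, not the paper's implementation; the function name and array layout are our own assumptions) measures the largest gap in positive-prediction rate across the intersectional groups defined by the Cartesian product of the sensitive attributes, which is the quantity a MultiFair-style method drives toward zero:

```python
import numpy as np

def statistical_parity_gap(y_pred, sensitive):
    """Largest difference in positive-prediction rate across intersectional groups.

    y_pred:    binary predictions, shape (n,)
    sensitive: shape (n, k), one column per sensitive attribute (e.g. gender, race).
    Groups are formed by the joint combination of all k attribute values,
    so fairness is checked over the intersection of attributes, not each one alone.
    """
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    # Each distinct row of `sensitive` defines one demographic group.
    groups = np.unique(sensitive, axis=0)
    rates = []
    for g in groups:
        mask = np.all(sensitive == g, axis=1)
        rates.append(y_pred[mask].mean())  # positive-prediction rate in this group
    return max(rates) - min(rates)
```

A gap of 0 means every intersectional group receives positive predictions at the same rate; a per-attribute check can report 0 while the intersectional gap is large, which is why the joint grouping matters.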




Related papers:

- Learning Fair Models without Sensitive Attributes: A Generative Approach
- Unbiased Subdata Selection for Fair Classification: A Unified Framework and Scalable Algorithms
- Addressing Fairness in Classification with a Model-Agnostic Multi-Objective Algorithm
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes
- Locating disparities in machine learning
- Learning Fair Classifiers via Min-Max F-divergence Regularization
- Group fairness without demographics using social networks
