Causal Multi-Level Fairness

by Vishwali Mhasawade, et al.

Algorithmic systems are known to disproportionately harm marginalized groups, especially when not all sources of bias are considered. While work in algorithmic fairness to date has primarily focused on addressing discrimination due to individually linked attributes, social science research elucidates how some properties we link to individuals can be conceptualized as having causes at population (e.g. structural/social) levels, and fairness with respect to attributes at multiple levels may be important. For example, instead of simply treating race as a protected attribute of an individual, it can be thought of as the perceived race of an individual, which in turn may be affected by neighborhood-level factors. This multi-level conceptualization is relevant to questions of fairness: it may be important to take into account not only whether the individual belonged to another demographic group, but also whether the individual received advantaged treatment at the population level. In this paper, we formalize the problem of multi-level fairness using tools from causal inference in a manner that allows one to assess and account for effects of sensitive attributes at multiple levels. We show the importance of the problem by illustrating the residual unfairness that arises when population-level sensitive attributes are not accounted for. Further, in the context of a real-world task of predicting income based on population- and individual-level attributes, we demonstrate an approach for mitigating unfairness due to multi-level sensitive attributes.
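The residual-unfairness point can be illustrated with a minimal sketch. This is not the paper's actual model; it is a hypothetical two-level structural causal model in which a population-level advantage N (e.g. neighborhood resources) influences an individual-level sensitive attribute A, and both influence an outcome Y. Equalizing only the individual-level attribute via an intervention do(A=a) leaves a gap driven by the population-level cause:

```python
import random

random.seed(0)

def sample(pop_adv=None, indiv_attr=None):
    """Draw (N, A, Y) from a toy two-level SCM.

    Keyword arguments act as do()-interventions: passing a value
    fixes that variable instead of sampling it from its parents.
    """
    # Population-level advantage N (hypothetical neighborhood factor).
    n = pop_adv if pop_adv is not None else int(random.random() < 0.5)
    # Individual-level sensitive attribute A, influenced by N.
    a = indiv_attr if indiv_attr is not None else int(random.random() < (0.8 if n else 0.3))
    # Outcome Y depends on both levels, plus noise.
    y = 1.0 * a + 2.0 * n + random.gauss(0.0, 0.1)
    return n, a, y

def mean_y(trials=20000, **do):
    """Monte Carlo estimate of E[Y] under the given interventions."""
    return sum(sample(**do)[2] for _ in range(trials)) / trials

# Intervening only on the individual-level attribute ...
gap_individual = mean_y(indiv_attr=1) - mean_y(indiv_attr=0)

# ... still leaves a residual disparity due to the population-level cause:
# among individuals with the same (intervened) attribute A, outcomes differ
# by the full effect of N.
residual_gap = mean_y(indiv_attr=1, pop_adv=1) - mean_y(indiv_attr=1, pop_adv=0)
```

In this toy model, `gap_individual` is close to the direct effect of A (1.0), while `residual_gap` is close to the effect of N (2.0): a fairness intervention restricted to the individual level cannot remove the population-level disparity.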




