Fairness for Image Generation with Uncertain Sensitive Attributes

06/23/2021
by Ajil Jalal, et al.

This work tackles the issue of fairness in the context of generative procedures, such as image super-resolution, which entail fairness definitions different from those in the standard classification setting. Moreover, while traditional group fairness is typically defined with respect to specified protected groups (camouflaging the fact that these groupings are artificial and carry historical and political motivations), we emphasize that there are no ground truth identities. For instance, should South and East Asians be viewed as a single group or separate groups? Should we consider one race as a whole or further split by gender? Choosing which groups are valid and who belongs in them is an impossible dilemma: being "fair" with respect to Asians may require being "unfair" with respect to South Asians. This motivates the introduction of definitions that allow algorithms to be oblivious to the relevant groupings. We define several intuitive notions of group fairness and study their incompatibilities and trade-offs. We show that the natural extension of demographic parity is strongly dependent on the grouping, and impossible to achieve obliviously. On the other hand, the conceptually new definition we introduce, Conditional Proportional Representation, can be achieved obliviously through Posterior Sampling. Our experiments validate our theoretical results and achieve fair image reconstruction using state-of-the-art generative models.
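To make the last claim concrete, here is a minimal sketch of why posterior sampling gives this kind of oblivious guarantee; the notation (true image x, measurement y, reconstruction \hat{x}, group C) is illustrative and not necessarily the paper's exact formalization:

\[
  \hat{x} \sim p(x \mid y)
  \quad\Longrightarrow\quad
  \Pr\left[\hat{x} \in C \mid y\right] = \Pr\left[x \in C \mid y\right]
  \quad\text{for every group of images } C.
\]

Since the identity holds for all groups C simultaneously, the reconstruction algorithm never needs to know which grouping is the relevant one: each group appears among the reconstructions in proportion to its posterior probability given the measurement, which is the sense in which the guarantee is oblivious.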
