Fairness Without Demographics in Repeated Loss Minimization

06/20/2018
by Tatsunori B. Hashimoto, et al.

Machine learning models (e.g., speech recognizers) are usually trained to minimize average loss, which results in representation disparity---minority groups (e.g., non-native speakers) contribute less to the training objective and thus tend to suffer higher loss. Worse, as model accuracy affects user retention, a minority group can shrink over time. In this paper, we first show that the status quo of empirical risk minimization (ERM) amplifies representation disparity over time, which can even make initially fair models unfair. To mitigate this, we develop an approach based on distributionally robust optimization (DRO), which minimizes the worst case risk over all distributions close to the empirical distribution. We prove that this approach controls the risk of the minority group at each time step, in the spirit of Rawlsian distributive justice, while remaining oblivious to the identity of the groups. We demonstrate that DRO prevents disparity amplification on examples where ERM fails, and show improvements in minority group user satisfaction in a real-world text autocomplete task.
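The DRO objective described above (minimizing the worst-case risk over all distributions close to the empirical one) can be illustrated with a small numerical sketch. For a chi-squared divergence ball around the empirical distribution, the worst-case risk admits a scalar dual of the form min_eta C * sqrt(E[(loss - eta)_+^2]) + eta, which the code below minimizes by grid search. The function name `dro_risk`, the constant `C` (which grows with the ball radius), and the grid-search bracket are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def dro_risk(losses, C):
    """Sketch of a chi-squared-ball worst-case risk via its scalar dual:

        min_eta  C * sqrt(mean(max(losses - eta, 0)^2)) + eta

    C >= 1 is a constant determined by the divergence-ball radius
    (hypothetical parameterization). The dual is convex in eta, so a
    simple grid search over a bracket suffices for illustration.
    """
    losses = np.asarray(losses, dtype=float)

    def dual(eta):
        # Hinge the losses at eta, then take the root-mean-square.
        excess = np.maximum(losses - eta, 0.0)
        return C * np.sqrt(np.mean(excess ** 2)) + eta

    # The minimizer lies at or below the maximum loss; search a bracket
    # extending below the minimum loss to cover heavy lower tails.
    etas = np.linspace(losses.min() - 5.0 * (losses.std() + 1.0),
                       losses.max(), 4001)
    return min(dual(eta) for eta in etas)
```

With a larger `C`, the objective places more weight on the upper tail of the loss distribution, so a small high-loss group (e.g., a minority group the model serves poorly) dominates the risk even though it contributes little to the average; this is how the approach controls minority-group risk without knowing group identities.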

Related research:

- Long term impact of fair machine learning in sequential decision making: representation disparity and group retention (05/02/2019)
- How does overparametrization affect performance on minority groups? (06/07/2022)
- Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization (11/20/2019)
- Estimating Structural Disparities for Face Models (04/13/2022)
- Modeling the Q-Diversity in a Min-max Play Game for Robust Optimization (05/20/2023)
- Group calibration is a byproduct of unconstrained learning (08/29/2018)
- Fairness With Minimal Harm: A Pareto-Optimal Approach For Healthcare (11/16/2019)
