Measuring Disparate Outcomes of Content Recommendation Algorithms with Distributional Inequality Metrics

02/03/2022
by   Tomo Lazovich, et al.
The harmful impacts of algorithmic decision systems have recently come into focus, with many examples of systems such as machine learning (ML) models amplifying existing societal biases. Most metrics attempting to quantify disparities resulting from ML algorithms focus on differences between groups, dividing users based on demographic identities and comparing model performance or overall outcomes between these groups. However, in industry settings, such information is often not available, and inferring these characteristics carries its own risks and biases. Moreover, typical metrics that focus on a single classifier's output ignore the complex network of systems that produce outcomes in real-world settings. In this paper, we evaluate a class of metrics originating in economics, known as distributional inequality metrics, and their ability to measure disparities in content exposure in a production recommendation system, the Twitter algorithmic timeline. We define desirable criteria for metrics to be used in an operational setting, specifically by ML practitioners. We characterize different types of engagement with content on Twitter using these metrics, and use these results to evaluate the metrics with respect to the desired criteria. We show that we can use these metrics to identify content suggestion algorithms that contribute more strongly to skewed outcomes between users. Overall, we conclude that these metrics can be useful tools for understanding disparate outcomes in online social networks.
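To make the idea concrete: the Gini coefficient is the canonical distributional inequality metric from economics, computed over a distribution of outcomes (e.g. per-user impression counts) rather than over demographic groups. The sketch below is an illustration of that general approach, not the paper's exact formulation, and the sample impression counts are hypothetical.

```python
def gini(values):
    """Gini coefficient of a list of non-negative outcomes (e.g. per-user
    impression counts). Returns 0.0 for a perfectly equal distribution and
    approaches 1.0 as outcomes concentrate on a few users."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Sorted-index formulation: G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

# Hypothetical per-user impression counts under two recommenders
equal_exposure = [100, 100, 100, 100]   # every user seen equally often
skewed_exposure = [1, 1, 1, 397]        # exposure concentrated on one user

print(gini(equal_exposure))   # 0.0
print(round(gini(skewed_exposure), 2))  # 0.74
```

Because such a metric needs only the outcome distribution itself, it can be applied without demographic labels, which is the operational advantage the abstract highlights.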
