Iterative Effect-Size Bias in Ridehailing: Measuring Social Bias in Dynamic Pricing of 100 Million Rides

06/08/2020
by Akshat Pandey, et al.

Algorithmic bias is the systematic preferential or discriminatory treatment of a group of people by an artificial intelligence system. In this work we develop a random-effects-based metric for analyzing social bias in supervised machine learning prediction models whose outputs depend on U.S. locations. We define a methodology for using U.S. Census data to measure social bias on user attributes legally protected against discrimination (protected attributes), such as ethnicity, sex, and religion. We evaluate our method on the Strategic Subject List (SSL) gun-violence prediction dataset, where we have access to both U.S. Census data and ground-truth protected attributes for 224,235 individuals in Chicago assessed for risk of involvement in future gun-violence incidents. Our results indicate that quantifying social bias using U.S. Census data provides a valid approach to auditing a supervised algorithmic decision-making system. Using our methodology, we then quantify the potential social biases in 100 million ridehailing samples from the city of Chicago. This work is the first large-scale fairness analysis of the dynamic pricing algorithms used by ridehailing applications. An analysis of Chicago ridehailing samples in conjunction with American Community Survey data indicates possible disparate impact due to social bias based on age, house pricing, education, and ethnicity in the dynamic fare pricing models used by ridehailing applications, with Cohen's d effect sizes of 0.74, 0.70, 0.34, and 0.31, respectively. Further, our methodology provides a principled approach to quantifying algorithmic bias on datasets where protected attributes are unavailable, provided that U.S. geolocations and algorithmic decisions are available.
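For readers who want to experiment with the kind of audit described in the abstract, the sketch below shows one way a Cohen's d effect size over fares could be computed between pickup areas with high versus low values of a census attribute, with per-stratum estimates pooled by a DerSimonian-Laird random-effects model. This is an illustration under assumed inputs, not the paper's released code; the column names ("fare", "pct_college_educated", "month") and the median split are assumptions.

```python
# Illustrative sketch only (not the paper's released code): estimate a Cohen's d
# effect size for ridehailing fares between pickup areas with high vs. low values
# of a census attribute, then pool per-stratum estimates with a
# DerSimonian-Laird random-effects model. The column names ("fare",
# "pct_college_educated", "month") and the median split are assumptions.
import numpy as np
import pandas as pd


def cohens_d(a, b):
    """Cohen's d for two samples, plus its approximate sampling variance."""
    n1, n2 = len(a), len(b)
    pooled_var = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2)
    d = (a.mean() - b.mean()) / np.sqrt(pooled_var)
    var_d = (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))
    return d, var_d


def random_effects_pool(effects, variances):
    """DerSimonian-Laird pooling of per-stratum effect sizes."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    if len(effects) == 1:
        return float(effects[0])
    w = 1.0 / variances                                   # fixed-effect weights
    d_fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - d_fixed) ** 2)              # heterogeneity statistic
    tau2 = max(0.0, (q - (len(effects) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1.0 / (variances + tau2)                     # random-effects weights
    return float(np.sum(w_star * effects) / np.sum(w_star))


def fare_effect_size(trips: pd.DataFrame, attribute: str) -> float:
    """Pooled Cohen's d of fares: high- vs. low-attribute pickup areas."""
    cutoff = trips[attribute].median()                    # assumed split point
    effects, variances = [], []
    for _, stratum in trips.groupby("month"):             # assumed stratification
        high = stratum.loc[stratum[attribute] >= cutoff, "fare"].to_numpy()
        low = stratum.loc[stratum[attribute] < cutoff, "fare"].to_numpy()
        if len(high) > 1 and len(low) > 1:
            d, v = cohens_d(high, low)
            effects.append(d)
            variances.append(v)
    return random_effects_pool(effects, variances)
```

The random-effects pooling lets strata with different sample sizes and variances contribute proportionally, rather than allowing a single large stratum to dominate the citywide effect size.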

Related research

- Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search (04/30/2019)
- A Discussion of Discrimination and Fairness in Insurance Pricing (09/02/2022)
- What's in a Name? Reducing Bias in Bios without Access to Protected Attributes (04/10/2019)
- Demographic Parity Inspector: Fairness Audits via the Explanation Space (03/14/2023)
- ABCinML: Anticipatory Bias Correction in Machine Learning Applications (06/14/2022)
- A fair pricing model via adversarial learning (02/24/2022)
- Directional Bias Amplification (02/24/2021)
