Fair Contextual Multi-Armed Bandits: Theory and Experiments

12/13/2019
by Yifang Chen, et al.

When an AI system interacts with multiple users, it frequently needs to make allocation decisions. For instance, a virtual agent decides whom to pay attention to in a group setting, or a factory robot selects a worker to deliver a part. Demonstrating fairness in decision making is essential for such systems to be broadly accepted. We introduce a Multi-Armed Bandit algorithm with fairness constraints, where fairness is defined as a minimum rate at which a task or a resource is assigned to a user. The proposed algorithm uses contextual information about the users and the task, and makes no assumptions about how the losses capturing the performance of different users are generated. We provide theoretical guarantees of performance and empirical results from simulation and an online user study. The results highlight the benefit of accounting for contexts in fair decision making, especially when users perform better in some contexts and worse in others.
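The abstract defines fairness as a minimum rate at which each user (arm) is assigned a task, and notes that no assumptions are made about how losses are generated, which is the adversarial bandit setting. As a hedged illustration only, not the paper's actual algorithm, one simple way to enforce such per-arm floors is to mix an Exp3-style exponential-weights distribution with the fairness floors, so every arm keeps at least its minimum selection probability. All names below (`FairExp3`, `min_rates`, `eta`) are hypothetical.

```python
import math
import random


class FairExp3:
    """Sketch of an adversarial bandit with per-arm minimum selection rates.

    Hypothetical illustration (not the paper's exact method): the selection
    distribution is p_i = r_i + (1 - sum(r)) * softmax(w)_i, which guarantees
    p_i >= r_i for every arm i while leaving the remaining probability mass
    to be allocated by exponential weights.
    """

    def __init__(self, n_arms, min_rates, eta=0.1):
        assert len(min_rates) == n_arms
        assert sum(min_rates) <= 1.0, "fairness floors must be feasible"
        self.n = n_arms
        self.r = list(min_rates)   # per-arm minimum selection rates
        self.eta = eta             # learning rate
        self.w = [0.0] * n_arms    # log-weights

    def probs(self):
        m = max(self.w)                            # stabilize the softmax
        exp_w = [math.exp(v - m) for v in self.w]
        z = sum(exp_w)
        free = 1.0 - sum(self.r)                   # mass left after the floors
        return [ri + free * ei / z for ri, ei in zip(self.r, exp_w)]

    def select(self, rng=random):
        """Sample an arm; returns (arm index, its selection probability)."""
        p = self.probs()
        u, acc = rng.random(), 0.0
        for i, pi in enumerate(p):
            acc += pi
            if u <= acc:
                return i, pi
        return self.n - 1, p[-1]

    def update(self, arm, prob, loss):
        # Importance-weighted loss estimate keeps the update unbiased
        # even though only the chosen arm's loss is observed.
        self.w[arm] -= self.eta * loss / prob
```

A contextual version in the spirit of the abstract could maintain one such distribution per context (or per context feature), but the floor-mixing idea above is the part that makes the minimum assignment rate explicit.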

Related research

02/18/2023 · Improving Fairness in Adaptive Social Exergames via Shapley Bandits
06/30/2019 · Reinforcement Learning with Fairness Constraints for Resource Distribution in Human-Robot Teams
02/08/2021 · Counterfactual Contextual Multi-Armed Bandit: a Real-World Application to Diagnose Apple Diseases
06/08/2022 · Efficient Resource Allocation with Fairness Constraints in Restless Multi-Armed Bandits
06/23/2023 · Trading-off price for data quality to achieve fair online allocation
05/14/2023 · Multi-View Interactive Collaborative Filtering
08/17/2023 · Equitable Restless Multi-Armed Bandits: A General Framework Inspired By Digital Health
