Data Banzhaf: A Data Valuation Framework with Maximal Robustness to Learning Stochasticity

05/30/2022
by Tianhao Wang, et al.

This paper studies the robustness of data valuation to noisy model performance scores. In particular, we find that the inherent randomness of the widely used stochastic gradient descent can cause existing data value notions (e.g., the Shapley value and the leave-one-out error) to produce inconsistent data value rankings across different runs. To address this challenge, we first introduce a formal framework within which the robustness of a data value notion can be measured. We show that the Banzhaf value, a value notion originating in the cooperative game theory literature, achieves the maximal robustness among all semivalues, a class of value notions that satisfy crucial properties entailed by ML applications. We propose an algorithm to efficiently estimate the Banzhaf value based on the Maximum Sample Reuse (MSR) principle. We derive a lower bound on the sample complexity of Banzhaf value estimation and show that the sample complexity of our MSR algorithm is close to this lower bound. Our evaluation demonstrates that the Banzhaf value outperforms existing semivalue-based data value notions on several downstream ML tasks, such as learning with weighted samples and noisy label detection. Overall, our study suggests that when the underlying ML algorithm is stochastic, the Banzhaf value is a promising alternative to other semivalue-based data value schemes, given its computational advantage and its ability to robustly differentiate data quality.
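The abstract does not spell out the estimator, but the Maximum Sample Reuse idea it refers to can be sketched as follows. This is an illustrative reading, not the authors' reference implementation: subsets are drawn by including each data point independently with probability 1/2, every sampled subset (i.e., every trained model) is reused for every data point, and the names utility_fn and msr_banzhaf_estimate, along with their signatures, are hypothetical and introduced here only for illustration.

import numpy as np

def msr_banzhaf_estimate(utility_fn, n, num_samples=1000, seed=None):
    """Sketch of an MSR-style Banzhaf value estimator for n data points.

    utility_fn(mask) is assumed to train a model on the data points selected
    by the boolean mask of length n and return a performance score
    (e.g., validation accuracy). Hypothetical interface for illustration.
    """
    rng = np.random.default_rng(seed)
    # Each sampled subset includes every data point independently with prob. 1/2.
    masks = rng.random((num_samples, n)) < 0.5
    scores = np.array([utility_fn(mask) for mask in masks])

    values = np.zeros(n)
    for i in range(n):
        with_i = scores[masks[:, i]]      # utilities of subsets containing point i
        without_i = scores[~masks[:, i]]  # utilities of subsets excluding point i
        if with_i.size == 0 or without_i.size == 0:
            continue  # too few samples to estimate this point's value
        # MSR reuses every sampled subset for every point: the estimate is the
        # gap between average utility with and without point i.
        values[i] = with_i.mean() - without_i.mean()
    return values

Under this sampling scheme, each data point's estimated value is simply the difference between the average utility of the sampled subsets that contain it and of those that do not, so a single collection of model-training runs serves all n estimates rather than requiring fresh marginal-contribution samples per point.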
