Evaluating Bayes Error Estimators on Real-World Datasets with FeeBee

by Cedric Renggli, et al.

The Bayes error rate (BER) is a fundamental concept in machine learning that quantifies the best possible accuracy any classifier can achieve on a fixed probability distribution. Despite years of research on building estimators of lower and upper bounds for the BER, such estimators were usually compared only on synthetic datasets with known probability distributions, leaving two key questions unanswered: (1) How well do they perform on real-world datasets? (2) How practical are they? Answering these is not trivial. Apart from the obvious challenge of an unknown BER for real-world datasets, there are two main hurdles any BER estimator needs to overcome in order to be applicable in real-world settings: (1) the computational and sample complexity, and (2) the sensitivity to, and selection of, hyper-parameters. In this work, we propose FeeBee, the first principled framework for analyzing and comparing BER estimators on any modern real-world dataset with unknown probability distribution. We achieve this by injecting a controlled amount of label noise and performing multiple evaluations on a series of different noise levels, supported by a theoretical result which allows drawing conclusions about the evolution of the BER. By implementing and analyzing 7 multi-class BER estimators on 6 commonly used datasets from the computer vision and NLP domains, FeeBee enables a thorough study of these estimators, clearly identifying the strengths and weaknesses of each, while being easily deployable on any future BER estimator.
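The noise-injection step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: it flips each label with probability rho to a uniformly random *other* class (symmetric label noise), and the sweep loop with a `some_ber_estimator` callable is a hypothetical placeholder for whichever estimator is being evaluated.

```python
import numpy as np

def inject_label_noise(labels, rho, num_classes, rng=None):
    """Flip each label with probability rho to a uniformly random other class."""
    rng = np.random.default_rng(rng)
    labels = np.asarray(labels)
    flip = rng.random(labels.shape[0]) < rho
    # drawing an offset in 1..num_classes-1 guarantees the flipped
    # label differs from the original
    offsets = rng.integers(1, num_classes, size=labels.shape[0])
    return np.where(flip, (labels + offsets) % num_classes, labels)

# Hypothetical evaluation sweep: re-estimate BER bounds at each noise level.
# for rho in [0.0, 0.1, 0.2, 0.4]:
#     y_noisy = inject_label_noise(y, rho, num_classes=10)
#     lower, upper = some_ber_estimator(X, y_noisy)
```

Evaluating the same estimator across several noise levels lets one check whether its reported bounds move consistently with the known effect of the injected noise on the BER, even though the clean-data BER itself is unknown.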


