Evaluating Bayes Error Estimators on Real-World Datasets with FeeBee

08/30/2021
by Cedric Renggli, et al.

The Bayes error rate (BER) is a fundamental concept in machine learning that quantifies the best possible accuracy any classifier can achieve on a fixed probability distribution. Despite years of research on building estimators of lower and upper bounds for the BER, these estimators were usually compared only on synthetic datasets with known probability distributions, leaving two key questions unanswered: (1) how well do they perform on real-world datasets, and (2) how practical are they? Answering these questions is not trivial. Apart from the obvious challenge of an unknown BER for real-world datasets, any BER estimator needs to overcome two main hurdles in order to be applicable in real-world settings: (1) its computational and sample complexity, and (2) its sensitivity to hyper-parameters and their selection. In this work, we propose FeeBee, the first principled framework for analyzing and comparing BER estimators on any modern real-world dataset with unknown probability distribution. We achieve this by injecting a controlled amount of label noise and performing multiple evaluations over a series of noise levels, supported by a theoretical result that allows drawing conclusions about the evolution of the BER. By implementing and analyzing 7 multi-class BER estimators on 6 commonly used datasets from the computer vision and NLP domains, FeeBee enables a thorough study of these estimators, clearly identifying the strengths and weaknesses of each, while being easily deployable on any future BER estimator.
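The core of this protocol is easy to prototype. Below is a minimal, self-contained sketch of the evaluation loop: inject symmetric label noise at several levels, run a BER estimator at each level, and compare the resulting curves against the theoretically predicted evolution. It uses a toy Gaussian mixture and the classical Cover-Hart 1-NN bounds as a stand-in estimator; the helper names (inject_symmetric_noise, cover_hart_bounds), the toy data, and the exact noise model are assumptions for illustration, not FeeBee's actual implementation.

```python
# A minimal sketch of the FeeBee-style protocol on a toy Gaussian mixture,
# using the classical Cover-Hart 1-NN bounds as a stand-in BER estimator.
# Helper names, noise model details, and the toy data are illustrative only
# and are not taken from the FeeBee codebase.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def inject_symmetric_noise(y, rho, num_classes, rng):
    """Flip each label with probability rho to a uniformly chosen other class."""
    y_noisy = y.copy()
    flip = rng.random(len(y)) < rho
    # Offsets in [1, num_classes - 1] guarantee the new label differs from the old.
    offsets = rng.integers(1, num_classes, size=int(flip.sum()))
    y_noisy[flip] = (y[flip] + offsets) % num_classes
    return y_noisy

def cover_hart_bounds(err_1nn, num_classes):
    """Lower/upper bounds on the BER from the 1-NN error (Cover & Hart, 1967)."""
    c = num_classes
    lower = (c - 1) / c * (1.0 - np.sqrt(max(0.0, 1.0 - c / (c - 1) * err_1nn)))
    return lower, err_1nn

rng = np.random.default_rng(0)
C, n_per_class = 3, 2000
# Three overlapping 2-D Gaussian blobs, so the true BER is strictly positive.
X = np.concatenate([rng.normal(loc=(2.0 * k, 0.0), scale=1.0, size=(n_per_class, 2))
                    for k in range(C)])
y = np.repeat(np.arange(C), n_per_class)

results = {}
for rho in [0.0, 0.1, 0.2, 0.3]:
    y_noisy = inject_symmetric_noise(y, rho, C, rng)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y_noisy, test_size=0.3, random_state=0)
    err = 1.0 - KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr).score(X_te, y_te)
    results[rho] = cover_hart_bounds(err, C)

# For this symmetric noise model (and rho < (C-1)/C), the noisy posterior is an
# affine map of the clean one, so the BER provably evolves linearly in rho:
#   BER(rho) = rho + (1 - rho * C / (C - 1)) * BER(0).
# Bounds estimated at rho = 0 can therefore be mapped forward and checked for
# consistency against the bounds estimated directly at each noise level.
lo0, up0 = results[0.0]
for rho, (lo, up) in results.items():
    scale = 1.0 - rho * C / (C - 1)
    print(f"rho={rho:.1f}  estimated=[{lo:.3f}, {up:.3f}]  "
          f"mapped-from-clean=[{rho + scale * lo0:.3f}, {rho + scale * up0:.3f}]")
```

On real-world datasets, where BER(0) is unknown, a known evolution of this kind is what allows judging an estimator by the consistency of its outputs across noise levels, rather than against an unknowable ground truth.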

Related research

- Distribution Regression Network (04/13/2018): We introduce our Distribution Regression Network (DRN) which performs re...
- Learning to Benchmark: Determining Best Achievable Misclassification Error from Training Data (09/16/2019): We address the problem of learning to benchmark the best achievable clas...
- GMM Discriminant Analysis with Noisy Label for Each Class (01/25/2022): Real world datasets often contain noisy labels, and learning from such d...
- On Automatic Feasibility Study for Machine Learning Application Development with ease.ml/snoopy (10/16/2020): In our experience working with domain experts who are using today's Auto...
- Selecting Near-Optimal Learners via Incremental Data Allocation (12/31/2015): We study a novel machine learning (ML) problem setting of sequentially a...
- PDPK: A Framework to Synthesise Process Data and Corresponding Procedural Knowledge for Manufacturing (08/16/2023): Procedural knowledge describes how to accomplish tasks and mitigate prob...
- Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks (03/26/2021): We algorithmically identify label errors in the test sets of 10 of the m...
