Selecting Near-Optimal Learners via Incremental Data Allocation

by Ashish Sabharwal et al.
Allen Institute for Artificial Intelligence

We study a novel machine learning (ML) problem setting of sequentially allocating small subsets of training data amongst a large set of classifiers. The goal is to select a classifier that will give near-optimal accuracy when trained on all data, while also minimizing the cost of misallocated samples. This is motivated by large modern datasets and ML toolkits with many combinations of learning algorithms and hyper-parameters. Inspired by the principle of "optimism under uncertainty," we propose an innovative strategy, Data Allocation using Upper Bounds (DAUB), which robustly achieves these objectives across a variety of real-world datasets. We further develop substantial theoretical support for DAUB in an idealized setting where the expected accuracy of a classifier trained on n samples can be known exactly. Under these conditions we establish a rigorous sub-linear bound on the regret of the approach (in terms of misallocated data), as well as a rigorous bound on suboptimality of the selected classifier. Our accuracy estimates using real-world datasets only entail mild violations of the theoretical scenario, suggesting that the practical behavior of DAUB is likely to approach the idealized behavior.
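The core loop described in the abstract, allocating data optimistically to the learner with the highest upper bound on its full-data accuracy, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the learner names, the closed-form synthetic accuracy curves (standing in for actually training classifiers on n samples), and the first-difference linear extrapolation used as the optimistic bound are all assumptions for the demo, chosen to mirror DAUB's "optimism under uncertainty" principle.

```python
import math

# Hypothetical accuracy curves acc(n) for three simulated learners.
# In practice each call would train a real classifier on n samples and
# evaluate it; these closed-form curves are a stand-in for the demo.
LEARNERS = {
    "A": lambda n: 0.90 - 2.0 / math.sqrt(n),  # slow starter, best asymptote
    "B": lambda n: 0.85 - 0.5 / math.sqrt(n),
    "C": lambda n: 0.80 - 0.1 / math.sqrt(n),  # strong early, weak asymptote
}

def daub_select(learners, n_total=10000, n0=100, growth=1.5):
    """DAUB-style sketch: incremental data allocation via upper bounds.

    Each learner keeps its two most recent (n, accuracy) observations.
    An optimistic bound on full-data accuracy is obtained by linearly
    extrapolating the latest first difference out to n_total; the learner
    with the highest bound receives the next (geometrically grown) batch.
    Returns the learner that first reaches the full dataset.
    """
    state = {}
    for name, acc in learners.items():
        n1, n2 = n0, int(n0 * growth)
        state[name] = [(n1, acc(n1)), (n2, acc(n2))]
    allocated = {name: obs[-1][0] for name, obs in state.items()}

    while True:
        # Optimistic upper bound: latest accuracy plus extrapolated slope,
        # clipped to 1.0 (accuracy is assumed nondecreasing in n).
        bounds = {}
        for name, obs in state.items():
            (na, fa), (nb, fb) = obs
            slope = max((fb - fa) / (nb - na), 0.0)
            bounds[name] = min(fb + slope * (n_total - nb), 1.0)

        best = max(bounds, key=bounds.get)
        n_next = int(allocated[best] * growth)
        if n_next >= n_total:
            return best  # this learner would now be trained on all data

        # "Train" the chosen learner on the larger sample and record it.
        state[best] = [state[best][-1], (n_next, learners[best](n_next))]
        allocated[best] = n_next
```

With these curves, learner C looks best on small samples, but its flat learning curve drives its upper bound down quickly; the extrapolated bounds steer most of the data toward learner A, which has the highest full-data accuracy, so misallocation to the weaker learners stays small, in the spirit of the regret bound described above.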


