Is One Epoch All You Need For Multi-Fidelity Hyperparameter Optimization?

07/28/2023
by   Romain Egele, et al.

Hyperparameter optimization (HPO) is crucial for fine-tuning machine learning models but can be computationally expensive. To reduce costs, Multi-fidelity HPO (MF-HPO) leverages intermediate accuracy levels in the learning process and discards low-performing models early on. We compared various representative MF-HPO methods against a simple baseline on classical benchmark data. The baseline involved discarding all models except the Top-K after training for only one epoch, followed by further training to select the best model. Surprisingly, this baseline achieved similar results to its counterparts, while requiring an order of magnitude less computation. Upon analyzing the learning curves of the benchmark data, we observed a few dominant learning curves, which explained the success of our baseline. This suggests that researchers should (1) always use the suggested baseline in benchmarks and (2) broaden the diversity of MF-HPO benchmarks to include more complex cases.
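The baseline described above is simple enough to sketch in code. The following is a minimal, hypothetical Python sketch, not the authors' implementation: `build_model`, `train_epochs`, and `validate` are assumed placeholders for whatever training and evaluation routines a given benchmark provides.

```python
def one_epoch_topk_baseline(configs, build_model, train_epochs, validate,
                            k=5, max_epochs=100):
    """Train every candidate for one epoch, keep the Top-K by validation
    score, train only those to completion, and return the best config."""
    # Phase 1: low-fidelity screening, one epoch per candidate.
    scored = []
    for cfg in configs:
        model = build_model(cfg)
        train_epochs(model, epochs=1)
        scored.append((validate(model), cfg, model))

    # Phase 2: discard everything except the K most promising candidates.
    scored.sort(key=lambda t: t[0], reverse=True)
    finalists = scored[:k]

    # Phase 3: full-fidelity training of the finalists only.
    best_score, best_cfg = float("-inf"), None
    for _, cfg, model in finalists:
        train_epochs(model, epochs=max_epochs - 1)  # continue from epoch 1
        score = validate(model)
        if score > best_score:
            best_score, best_cfg = score, cfg
    return best_cfg, best_score
```

Under this sketch, only K models ever see more than one epoch of training, which is where the order-of-magnitude reduction in compute comes from.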


