Fast Hyperparameter Optimization of Deep Neural Networks via Ensembling Multiple Surrogates

11/06/2018
by   Yang Li, et al.

The performance of deep neural networks (DNNs) crucially depends on good hyperparameter configurations, and Bayesian optimization is a powerful framework for tuning them. These methods need sufficient evaluation data to approximate and minimize the validation error as a function of the hyperparameters. However, the expensive evaluation cost of DNNs yields very few evaluations within a limited time budget, which greatly reduces the efficiency of Bayesian optimization. In addition, previous research has focused on using only the complete evaluation data, ignoring the intermediate evaluation data generated by early stopping methods. To alleviate this scarcity of evaluation data, we propose a fast hyperparameter optimization method, HOIST, that utilizes both the complete and the intermediate evaluation data to accelerate the hyperparameter optimization of DNNs. Specifically, we train multiple basic surrogates to gather information from the mixed evaluation data, and then combine all basic surrogates using weighted bagging to obtain an accurate ensemble surrogate. Our empirical studies show that HOIST outperforms state-of-the-art approaches on a wide range of DNNs, including feedforward neural networks, convolutional neural networks, recurrent neural networks, and variational autoencoders.
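The sketch below illustrates the ensemble-surrogate idea described in the abstract: fit one basic surrogate per fidelity level (intermediate results from early stopping plus complete evaluations) and combine their predictions with data-driven weights. The choice of scikit-learn Gaussian processes as basic surrogates and of Kendall rank correlation against the complete evaluations as the weighting signal are assumptions for illustration; the paper's exact weighted-bagging rule may differ.

```python
# Minimal sketch of an ensemble surrogate over mixed-fidelity evaluation data.
# Assumptions (not from the paper): GP basic surrogates, weights from each
# surrogate's rank correlation with the fully evaluated configurations.
import numpy as np
from scipy.stats import kendalltau
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


class EnsembleSurrogate:
    def __init__(self, n_fidelities):
        # One basic surrogate per fidelity level.
        self.models = [GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                                 normalize_y=True)
                       for _ in range(n_fidelities)]
        self.weights = np.ones(n_fidelities) / n_fidelities

    def fit(self, data_per_fidelity, full_X, full_y):
        """data_per_fidelity: list of (X, y) pairs, one per fidelity level;
        full_X, full_y: completely evaluated hyperparameter configurations."""
        scores = []
        for model, (X, y) in zip(self.models, data_per_fidelity):
            model.fit(X, y)
            # Weight each surrogate by how well it ranks the complete data.
            tau, _ = kendalltau(model.predict(full_X), full_y)
            scores.append(max(tau, 0.0))
        scores = np.asarray(scores)
        self.weights = (scores / scores.sum() if scores.sum() > 0
                        else np.ones(len(scores)) / len(scores))
        return self

    def predict(self, X):
        # Weighted combination of the basic surrogates' predictions.
        means, stds = zip(*(m.predict(X, return_std=True) for m in self.models))
        mean = np.average(np.stack(means), axis=0, weights=self.weights)
        std = np.average(np.stack(stds), axis=0, weights=self.weights)
        return mean, std
```

The resulting mean and uncertainty estimates could then drive a standard acquisition function (e.g., expected improvement) in the Bayesian optimization loop.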

research
07/04/2018

BOHB: Robust and Efficient Hyperparameter Optimization at Scale

Modern deep learning methods are very sensitive to many hyperparameters,...
research
07/28/2016

Efficient Hyperparameter Optimization of Deep Learning Algorithms Using Deterministic RBF Surrogates

Automatically searching for optimal hyperparameter configurations is of ...
research
05/23/2016

Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets

Bayesian optimization has become a successful tool for hyperparameter op...
research
02/24/2022

DC and SA: Robust and Efficient Hyperparameter Optimization of Multi-subnetwork Deep Learning Models

We present two novel hyperparameter optimization strategies for optimiza...
research
05/13/2019

Tabular Benchmarks for Joint Architecture and Hyperparameter Optimization

Due to the high computational demands executing a rigorous comparison be...
research
02/06/2018

Scalable Meta-Learning for Bayesian Optimization

Bayesian optimization has become a standard technique for hyperparameter...
research
06/25/2022

Bayesian Optimization Over Iterative Learners with Structured Responses: A Budget-aware Planning Approach

The rising growth of deep neural networks (DNNs) and datasets in size mo...
