How much progress have we made in neural network training? A New Evaluation Protocol for Benchmarking Optimizers

10/19/2020
by   Yuanhao Xiong, et al.

Many optimizers have been proposed for training deep neural networks, and they often have multiple hyperparameters, which makes benchmarking their performance tricky. In this work, we propose a new benchmarking protocol that evaluates both end-to-end efficiency (training a model from scratch without knowing the best hyperparameters) and data-addition training efficiency (reusing previously selected hyperparameters when periodically re-training the model on newly collected data). For end-to-end efficiency, unlike previous work that assumes random hyperparameter search, which over-emphasizes tuning time, we propose to evaluate with a bandit hyperparameter tuning strategy. A human study shows that our evaluation protocol matches human tuning behavior better than random search does. For data-addition training, we propose a new protocol for assessing hyperparameter sensitivity to data shift. We then apply the proposed benchmarking framework to 7 optimizers and a variety of tasks, spanning computer vision, natural language processing, reinforcement learning, and graph mining. Our results show that there is no clear winner across all tasks.
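The abstract does not spell out which bandit strategy is used, but a common instance of bandit-style hyperparameter tuning is successive halving: evaluate all candidate configurations with a small training budget, keep the best fraction, and reallocate the freed budget to the survivors. A minimal sketch (the `evaluate` function, the `eta` elimination factor, and the toy learning-rate example are illustrative assumptions, not the paper's exact protocol):

```python
import math

def successive_halving(configs, evaluate, min_budget=1, eta=2):
    """Bandit-style tuning sketch: score every config with a small
    budget, keep the top 1/eta fraction, multiply the budget by eta,
    and repeat until one config survives.

    `evaluate(cfg, budget)` should return a score (higher is better),
    e.g. validation accuracy after `budget` training epochs.
    """
    candidates = list(configs)
    budget = min_budget
    while len(candidates) > 1:
        ranked = sorted(candidates,
                        key=lambda c: evaluate(c, budget),
                        reverse=True)
        candidates = ranked[:max(1, len(ranked) // eta)]
        budget *= eta  # survivors get a larger training budget
    return candidates[0]

# Toy usage: pick a learning rate, scoring by log-distance to an
# assumed optimum of 0.1 (a real run would train and validate a model;
# here the score ignores `budget` for simplicity).
lrs = [0.001, 0.01, 0.1, 1.0]
score = lambda lr, budget: -abs(math.log10(lr) - math.log10(0.1))
best = successive_halving(lrs, score)  # -> 0.1
```

Compared with random search, this allocates most of the compute to promising configurations, which is closer to how a human tunes: abandon bad runs early rather than training every trial to completion.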


