Amazon SageMaker Automatic Model Tuning: Scalable Black-box Optimization

12/15/2020
by Valerio Perrone, et al.

Tuning complex machine learning systems is challenging. Machine learning models typically expose a set of hyperparameters, be it regularization, architecture, or optimization parameters, whose careful tuning is critical to achieve good performance. To democratize access to such systems, it is essential to automate this tuning process. This paper presents Amazon SageMaker Automatic Model Tuning (AMT), a fully managed system for black-box optimization at scale. AMT finds the best version of a machine learning model by repeatedly training it with different hyperparameter configurations. It leverages either random search or Bayesian optimization to choose the hyperparameter values resulting in the best-performing model, as measured by the metric chosen by the user. AMT can be used with built-in algorithms, custom algorithms, and Amazon SageMaker pre-built containers for machine learning frameworks. We discuss the core functionality, system architecture and our design principles. We also describe some more advanced features provided by AMT, such as automated early stopping and warm-starting, demonstrating their benefits in experiments.
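As an illustration of the workflow the abstract describes, below is a minimal sketch of launching a tuning job through the SageMaker Python SDK's HyperparameterTuner, which is how AMT is typically invoked. The training image, metric name, log regex, hyperparameter ranges, and S3 paths are illustrative assumptions, not details taken from the paper.

# Minimal sketch: launching an AMT job via the SageMaker Python SDK.
# The image URI, role, metric regex, ranges, and S3 paths below are
# illustrative placeholders; substitute your own values.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.tuner import (
    HyperparameterTuner,
    ContinuousParameter,
    IntegerParameter,
)

session = sagemaker.Session()

# Any SageMaker estimator works here: a built-in algorithm, a pre-built
# framework container, or a fully custom training image.
estimator = Estimator(
    image_uri="<your-training-image>",
    role="<your-sagemaker-execution-role>",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

# The search space AMT explores; log scaling suits learning rates.
hyperparameter_ranges = {
    "learning_rate": ContinuousParameter(1e-4, 1e-1, scaling_type="Logarithmic"),
    "num_layers": IntegerParameter(2, 8),
}

tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:accuracy",  # assumed metric name
    metric_definitions=[
        {
            "Name": "validation:accuracy",
            # Assumed log format emitted by the training job:
            "Regex": "validation-accuracy: ([0-9\\.]+)",
        }
    ],
    hyperparameter_ranges=hyperparameter_ranges,
    objective_type="Maximize",
    strategy="Bayesian",         # or "Random" for random search
    max_jobs=20,                 # total training jobs to run
    max_parallel_jobs=4,         # training jobs run concurrently
    early_stopping_type="Auto",  # automated early stopping of poor trials
)

tuner.fit({"train": "s3://<bucket>/train", "validation": "s3://<bucket>/val"})

The warm-starting feature described in the paper is exposed through the same class: passing a sagemaker.tuner.WarmStartConfig (for example, with WarmStartTypes.IDENTICAL_DATA_AND_ALGORITHM and one or more parent tuning job names) as the warm_start_config argument lets a new tuning job reuse evaluations from earlier ones rather than starting the search from scratch.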
