Automatic Gradient Boosting

by Janek Thomas, et al.

Automatic machine learning performs predictive modeling with high-performing machine learning tools without human interference. This is achieved by making machine learning applications parameter-free, i.e. only a dataset is provided while the complete model selection and model building process is handled internally through (often meta-) optimization. Projects like Auto-WEKA and auto-sklearn aim to solve the Combined Algorithm Selection and Hyperparameter optimization (CASH) problem, resulting in huge configuration spaces. However, for most real-world applications, optimizing over only a few key learning algorithms can not only be sufficient but also potentially beneficial. The latter becomes apparent when one considers that models have to be validated, explained, deployed and maintained. Here, less complex models are often preferred for validation or efficiency reasons, or are even a strict requirement. Automatic gradient boosting takes this idea one step further, using only gradient boosting as a single learning algorithm in combination with model-based hyperparameter tuning, threshold optimization and encoding of categorical features. We introduce this general framework as well as a concrete implementation called autoxgboost. It is compared to current AutoML projects on 16 datasets and, despite its simplicity, is able to achieve comparable results on about half of the datasets and to perform best on two.
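The pipeline the abstract describes combines three ingredients: a single gradient boosting learner, hyperparameter tuning, and decision-threshold optimization. A minimal sketch of that idea is shown below; it is not the authors' implementation (autoxgboost is an R package built on XGBoost and model-based optimization) but an illustration using scikit-learn's `GradientBoostingClassifier` as a stand-in for XGBoost and random search as a stand-in for model-based tuning, on a synthetic dataset.

```python
# Hypothetical sketch of an autoxgboost-style pipeline (NOT the paper's code):
# tune a single gradient boosting learner, then optimize the decision threshold.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# 1) Hyperparameter tuning: sample a few configurations, keep the best
#    (the paper uses model-based optimization instead of random search).
best_score, best_model = -1.0, None
for _ in range(5):
    params = {
        "learning_rate": float(10 ** rng.uniform(-2, -0.5)),
        "max_depth": int(rng.integers(2, 6)),
        "n_estimators": int(rng.integers(50, 200)),
    }
    model = GradientBoostingClassifier(random_state=0, **params).fit(X_tr, y_tr)
    score = f1_score(y_val, model.predict(X_val))
    if score > best_score:
        best_score, best_model = score, model

# 2) Threshold optimization: pick the probability cutoff maximizing F1
#    on held-out data, rather than the default 0.5.
probs = best_model.predict_proba(X_val)[:, 1]
best_thresh = max(np.linspace(0.1, 0.9, 17),
                  key=lambda t: f1_score(y_val, probs >= t))

print(f"validation F1 = {best_score:.3f}, threshold = {best_thresh:.2f}")
```

Categorical-feature encoding, the third component of the framework, would slot in before step 1 (e.g. via one-hot or impact encoding of non-numeric columns).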


