Large Random Forests: Optimisation for Rapid Evaluation

12/23/2019
by Frederik Gossen, et al.

Random Forests are one of the most popular classifiers in machine learning. The larger they are, the more precise their predictions become. However, this comes at a cost: their running time for classification grows linearly with the number of trees, i.e., the size of the forest. In this paper, we propose a method to aggregate large Random Forests into a single, semantically equivalent decision diagram. Our experiments on various popular datasets show speed-ups of several orders of magnitude while, at the same time, also significantly reducing the size of the required data structure.
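The paper itself does not publish code here, but the core idea can be illustrated with a minimal sketch: instead of walking every tree at prediction time, precompute a single decision diagram whose leaves already encode the majority vote, sharing identical sub-diagrams. Everything below (the nested-tuple tree encoding, the `restrict`/`aggregate` helpers, the toy forest) is an assumption for illustration, not the authors' actual algorithm or data format.

```python
def eval_node(node, x):
    # Walk from the root to a leaf; leaves are class labels (ints).
    # A decision node is a tuple (feature_index, low_child, high_child).
    while isinstance(node, tuple):
        feat, lo, hi = node
        node = hi if x[feat] else lo
    return node

def forest_predict(forest, x):
    # Naive evaluation: one root-to-leaf walk per tree, then majority vote.
    votes = [eval_node(t, x) for t in forest]
    return max(set(votes), key=votes.count)

def restrict(t, feat, value):
    # Partially evaluate tree t under the assumption x[feat] == value.
    if not isinstance(t, tuple):
        return t
    f, lo, hi = t
    if f == feat:
        return restrict(hi if value else lo, feat, value)
    return (f, restrict(lo, feat, value), restrict(hi, feat, value))

def aggregate(forest):
    # Merge the whole forest into one decision diagram by splitting on
    # binary features in a fixed order; identical sub-diagrams are shared
    # via memoisation, and redundant tests (lo == hi) are dropped.
    memo = {}
    def build(trees, depth):
        key = (depth, trees)
        if key in memo:
            return memo[key]
        if all(not isinstance(t, tuple) for t in trees):
            votes = list(trees)
            result = max(set(votes), key=votes.count)  # vote once, offline
        else:
            lo = build(tuple(restrict(t, depth, 0) for t in trees), depth + 1)
            hi = build(tuple(restrict(t, depth, 1) for t in trees), depth + 1)
            result = lo if lo == hi else (depth, lo, hi)
        memo[key] = result
        return result
    return build(tuple(forest), 0)

# A hypothetical three-tree forest over two binary features.
forest = [
    (0, 0, (1, 0, 1)),
    (1, (0, 0, 1), 1),
    (0, 1, 0),
]
dd = aggregate(forest)
```

For this toy forest the three trees collapse into a single test of feature 1, so classification costs one node visit instead of three tree traversals; the paper's speed-ups come from the same effect at scale, where the diagram is evaluated with one root-to-leaf walk regardless of how many trees were merged.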


Related research

02/12/2018 · Random Hinge Forest for Differentiable Learning
We propose random hinge forests, a simple, efficient, and novel variant ...

04/07/2022 · Q-learning with online random forests
Q-learning is the most fundamental model-free reinforcement learning alg...

05/25/2019 · Asymptotic Distributions and Rates of Convergence for Random Forests and other Resampled Ensemble Learners
Random forests remain among the most popular off-the-shelf supervised le...

07/22/2015 · Banzhaf Random Forests
Random forests are a type of ensemble method which makes predictions by ...

07/11/2020 · Towards Robust Classification with Deep Generative Forests
Decision Trees and Random Forests are among the most widely used machine...

10/31/2020 · Estimating County-Level COVID-19 Exponential Growth Rates Using Generalized Random Forests
Rapid and accurate detection of community outbreaks is critical to addre...

06/29/2017 · Generalising Random Forest Parameter Optimisation to Include Stability and Cost
Random forests are among the most popular classification and regression ...
