On Aggregation in Ensembles of Multilabel Classifiers

06/21/2020
by   Vu-Linh Nguyen, et al.

While a variety of ensemble methods for multilabel classification have been proposed in the literature, the question of how to aggregate the predictions of the individual members of the ensemble has received little attention so far. In this paper, we introduce a formal framework of ensemble multilabel classification, in which we distinguish two principal approaches: "predict then combine" (PTC), where the ensemble members first make loss-minimizing predictions which are subsequently combined, and "combine then predict" (CTP), which first aggregates information such as marginal label probabilities from the individual ensemble members, and then derives a prediction from this aggregation. While both approaches generalize the voting techniques commonly used for multilabel ensembles, they make it possible to explicitly take the target performance measure into account. Consequently, concrete instantiations of CTP and PTC can be tailored to specific loss functions. Experimentally, we show that standard voting techniques are indeed outperformed by suitable instantiations of CTP and PTC, and provide some evidence that CTP performs well for decomposable loss functions, whereas PTC is the better choice for non-decomposable losses.
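To make the distinction concrete, the following minimal sketch (not the paper's exact instantiations) contrasts the two aggregation orders for Hamming loss, for which thresholding marginal probabilities at 0.5 is the loss-minimizing prediction. The array shape and function names are illustrative assumptions.

```python
import numpy as np

# `member_marginals` has shape (n_members, n_labels): each row holds one
# ensemble member's estimated marginal probabilities P(y_k = 1 | x).

def ctp_hamming(member_marginals: np.ndarray) -> np.ndarray:
    """Combine then predict: average the marginals, then threshold once."""
    aggregated = member_marginals.mean(axis=0)
    return (aggregated >= 0.5).astype(int)

def ptc_hamming(member_marginals: np.ndarray) -> np.ndarray:
    """Predict then combine: each member thresholds its own marginals
    (its Hamming-loss minimizer), then a per-label majority vote is taken."""
    member_predictions = (member_marginals >= 0.5).astype(int)
    return (member_predictions.mean(axis=0) >= 0.5).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    marginals = rng.uniform(size=(5, 4))  # toy data: 5 members, 4 labels
    print("CTP:", ctp_hamming(marginals))
    print("PTC:", ptc_hamming(marginals))
```

For a decomposable loss such as Hamming loss the two orders often agree, but for non-decomposable losses (e.g., the subset 0/1 loss) the member-level loss minimizers are joint label combinations rather than per-label thresholds, which is where the choice between CTP and PTC becomes consequential.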


