Mixture of Experts with Uncertainty Voting for Imbalanced Deep Regression Problems

by Yuchang Jiang, et al.

Data imbalance is ubiquitous when applying machine learning to real-world problems, particularly regression problems. If training data are imbalanced, learning is dominated by the densely covered regions of the target distribution; consequently, the learned regressor tends to perform poorly in sparsely covered regions. Beyond standard measures like over-sampling or re-weighting, there are two main directions for handling learning from imbalanced data. For regression, recent work relies on the continuity of the distribution, whereas for classification there has been a trend to employ mixture-of-experts models and let some ensemble members specialize in predictions for the sparser regions. Here, we adapt the mixture-of-experts approach to the regression setting. A central question in this approach is how to fuse the predictions of multiple experts into one output. Drawing inspiration from recent work on probabilistic deep learning, we propose to base the fusion on the aleatoric uncertainties of the individual experts, thus obviating the need for a separate aggregation module. In our method, dubbed MOUV, each expert predicts not only an output value but also its uncertainty, which in turn serves as a statistically motivated criterion for relying on the right experts. We compare our method with existing alternatives on multiple public benchmarks and show that MOUV consistently outperforms the prior art, while at the same time producing better-calibrated uncertainty estimates. Our code is available at link-upon-publication.
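The abstract does not spell out the exact fusion rule, but a natural statistically motivated way to combine per-expert predictions with their aleatoric uncertainties is inverse-variance weighting, where experts that report low predictive variance dominate the fused output. The sketch below illustrates this idea under that assumption; the function name, array layout, and NumPy implementation are illustrative and not taken from the paper's code.

```python
import numpy as np

def fuse_predictions(means, variances):
    """Fuse per-expert regression outputs by inverse-variance weighting.

    means, variances: arrays of shape (num_experts, num_samples), where
    each expert reports a predicted value and its aleatoric variance.
    Returns the fused mean and the fused (combined) variance.
    """
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)

    # Weight each expert by the inverse of its predicted variance,
    # then normalize the weights across experts per sample.
    weights = 1.0 / variances
    weights = weights / weights.sum(axis=0, keepdims=True)

    # Weighted average of the expert means; the combined variance is the
    # harmonic-style reduction 1 / sum(1 / sigma_i^2).
    fused_mean = (weights * means).sum(axis=0)
    fused_var = 1.0 / (1.0 / variances).sum(axis=0)
    return fused_mean, fused_var
```

For example, two experts predicting 1.0 and 3.0 with equal variance fuse to 2.0, while an expert reporting a much smaller variance pulls the fused prediction almost entirely toward its own estimate.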




