Estimation and Feature Selection in Mixtures of Generalized Linear Experts Models

07/14/2019
by Bao-Tuyen Huynh, et al.

Mixtures-of-Experts (MoE) are conditional mixture models that have proven effective at modeling heterogeneity in data in many statistical learning approaches for prediction, including regression and classification, as well as for clustering. However, their estimation in high-dimensional problems remains challenging. We consider the problem of parameter estimation and feature selection in MoE models with different generalized linear expert models, and propose a regularized maximum likelihood estimation approach that efficiently encourages sparse solutions for heterogeneous data with high-dimensional predictors. The developed proximal-Newton EM algorithm includes proximal Newton-type procedures that update the model parameters by monotonically maximizing the objective function, and allows efficient estimation and feature selection to be performed. An experimental study shows the good performance of the algorithms, compared to the main state-of-the-art competitors, in terms of recovering the actual sparse solutions, parameter estimation, and clustering of heterogeneous regression data.
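
The full algorithmic details are in the paper; as a rough illustration of the regularized-EM idea, the sketch below fits a two-component mixture of linear regression experts with an l1 penalty on the expert coefficients. It is a minimal approximation, not the authors' method: constant mixing proportions stand in for the covariate-dependent softmax gates of an MoE, and a plain proximal gradient update replaces the paper's proximal Newton step. All names and parameter values are illustrative assumptions.

```python
# Minimal sketch: EM with an l1-penalized (sparsity-inducing) M-step for a
# two-component mixture of linear regression experts. Hypothetical setup:
# constant mixing proportions instead of softmax gates, proximal gradient
# instead of the paper's proximal Newton update.
import numpy as np

rng = np.random.default_rng(0)

# --- synthetic heterogeneous data with sparse expert coefficients ---
n, p, K = 400, 20, 2
beta_true = np.zeros((K, p))
beta_true[0, :3] = [3.0, -2.0, 1.5]      # expert 1 uses features 0-2
beta_true[1, 3:6] = [-2.5, 2.0, -1.0]    # expert 2 uses features 3-5
X = rng.normal(size=(n, p))
z = rng.integers(0, K, size=n)           # latent component labels
y = np.einsum('ij,ij->i', X, beta_true[z]) + 0.5 * rng.normal(size=n)

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# --- initialization (lam is an assumed penalty strength) ---
lam = 5.0
pi = np.full(K, 1.0 / K)                 # mixing proportions
beta = rng.normal(scale=0.1, size=(K, p))
sigma2 = np.ones(K)

for it in range(200):
    # E-step: posterior responsibilities r[i, k]
    log_dens = (-0.5 * (y[:, None] - X @ beta.T) ** 2 / sigma2
                - 0.5 * np.log(2 * np.pi * sigma2) + np.log(pi))
    log_dens -= log_dens.max(axis=1, keepdims=True)
    r = np.exp(log_dens)
    r /= r.sum(axis=1, keepdims=True)

    # M-step: mixing proportions, then l1-penalized weighted least squares
    pi = r.mean(axis=0)
    for k in range(K):
        rk = r[:, k]
        # Lipschitz constant of the weighted quadratic loss gradient
        L = np.linalg.norm(X * np.sqrt(rk)[:, None], 2) ** 2 / sigma2[k] + 1e-12
        for _ in range(10):              # a few proximal gradient passes
            grad = -X.T @ (rk * (y - X @ beta[k])) / sigma2[k]
            beta[k] = soft_threshold(beta[k] - grad / L, lam / L)
        resid = y - X @ beta[k]
        sigma2[k] = (rk * resid ** 2).sum() / rk.sum()

print("estimated supports:",
      [np.flatnonzero(np.abs(b) > 0.05) for b in beta])
```

The soft-thresholding inside the M-step is what drives coefficients of irrelevant features exactly to zero; this is the feature-selection effect that the penalized likelihood is designed to produce, which the paper's proximal Newton updates achieve more efficiently by exploiting second-order information.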


