Streaming regularization parameter selection via stochastic gradient descent

by Ricardo Pio Monti, et al.

We propose a framework to perform streaming covariance selection. Our approach employs regularization constraints in which a time-varying sparsity parameter is iteratively estimated via stochastic gradient descent, allowing the regularization parameter to be learnt efficiently in an online manner. The proposed framework is developed for linear regression models and extended to graphical models via neighbourhood selection. Under mild assumptions, we obtain convergence results in a non-stochastic setting. The capabilities of the approach are demonstrated on both synthetic and neuroimaging data.
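The abstract describes two coupled online updates: a proximal stochastic gradient step for the sparse regression coefficients, and a stochastic gradient step for the time-varying regularization parameter itself, driven by the one-step-ahead prediction error. The sketch below illustrates this idea for the linear regression case. All names, step sizes, and the chain-rule surrogate gradient for the penalty are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def soft_threshold(z, t):
    # Elementwise soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def streaming_lasso(X, y, eta_beta=0.05, eta_lam=0.005, lam0=0.5):
    """Streaming lasso in which the penalty `lam` is itself adapted by
    SGD on the one-step-ahead squared prediction error.
    Sketch only: step sizes and the gradient surrogate are assumptions."""
    n, p = X.shape
    beta = np.zeros(p)
    lam = lam0
    lam_path = np.empty(n)
    for t in range(n):
        x_t, y_t = X[t], y[t]
        err = y_t - x_t @ beta  # one-step-ahead residual under current beta
        # Proximal SGD update of the coefficients under the current penalty.
        beta = soft_threshold(beta + eta_beta * err * x_t, eta_beta * lam)
        # Chain rule through the soft-threshold: on the active set,
        # d beta_j / d lam = -eta_beta * sign(beta_j), so the gradient of
        # the post-update squared error w.r.t. lam is cheap to track.
        new_err = y_t - x_t @ beta
        grad_lam = 2.0 * eta_beta * new_err * (x_t @ np.sign(beta))
        lam = max(lam - eta_lam * grad_lam, 0.0)  # keep penalty nonnegative
        lam_path[t] = lam
    return beta, lam_path
```

Because `lam` follows its own gradient trajectory, the returned `lam_path` traces how much shrinkage the stream currently warrants, which is the quantity the paper proposes to learn online rather than tune by cross-validation.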


Related articles:

- Adaptive regularization for Lasso models in the context of non-stationary data streams
- The Statistics of Streaming Sparse Regression
- Towards the interpretation of time-varying regularization parameters in streaming penalized regression models
- Learning from time-dependent streaming data with online stochastic algorithms
- Streaming Sparse Linear Regression
- Debiasing Stochastic Gradient Descent to handle missing values
- Kernel Clustering with Sigmoid-based Regularization for Efficient Segmentation of Sequential Data
