Fused-Lasso Regularized Cholesky Factors of Large Nonstationary Covariance Matrices of Longitudinal Data

by Aramayis Dallakyan et al.

Smoothness of the subdiagonals of the Cholesky factor of large covariance matrices is closely related to the degree of nonstationarity of autoregressive models for time series and longitudinal data. Heuristically, for a nearly stationary covariance matrix one expects the entries in each subdiagonal of the Cholesky factor of its inverse to be nearly the same, in the sense that the sum of absolute values of the differences between successive terms is small. Statistically, such smoothness is achieved by regularizing each subdiagonal using fused-type lasso penalties. We rely on the standard Cholesky factor as the new parameters within a regularized normal likelihood setup, which guarantees: (1) joint convexity of the likelihood function, (2) strict convexity of the likelihood function restricted to each subdiagonal even when n < p, and (3) positive-definiteness of the estimated covariance matrix. A block coordinate descent algorithm, where each block is a subdiagonal, is proposed, and its convergence is established under mild conditions. The lack of decoupling of the penalized likelihood function into a sum of functions involving individual subdiagonals gives rise to some computational challenges and advantages relative to two recent algorithms for sparse estimation of the Cholesky factor that decouple row-wise. Simulation results and real data analysis show the scope and good performance of the proposed methodology.
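The objective described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it assumes the precision matrix is parameterized as Omega = L L' with L lower triangular, uses the Gaussian negative log-likelihood -log det(Omega) + tr(S Omega), and applies a fused-lasso penalty to the successive differences within each subdiagonal of L. The function names and the tuning parameter `lam` are illustrative.

```python
import numpy as np

def subdiag(L, k):
    # Entries of the k-th subdiagonal of the lower-triangular matrix L.
    return np.array([L[i + k, i] for i in range(L.shape[0] - k)])

def fused_lasso_objective(L, S, lam):
    """Penalized Gaussian negative log-likelihood (illustrative sketch).

    Assumes Omega = L @ L.T, so log det(Omega) = 2 * sum(log(diag(L))).
    Adds a fused-lasso penalty on the successive differences within
    each subdiagonal of L, encouraging near-constant subdiagonals
    (i.e., near-stationarity).
    """
    p = L.shape[0]
    neg_loglik = -2.0 * np.sum(np.log(np.diag(L))) + np.trace(S @ L @ L.T)
    penalty = sum(np.sum(np.abs(np.diff(subdiag(L, k)))) for k in range(1, p))
    return neg_loglik + lam * penalty
```

A block coordinate descent scheme as described in the abstract would cycle over the subdiagonals, minimizing this objective over one subdiagonal at a time while holding the others fixed; the penalty term is zero exactly when every subdiagonal of L is constant.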

