Sparse principal component analysis via random projections

by Milana Gataric et al.

We introduce a new method for sparse principal component analysis, based on the aggregation of eigenvector information from carefully-selected random projections of the sample covariance matrix. Unlike most alternative approaches, our algorithm is non-iterative, and so is not vulnerable to a bad choice of initialisation. Our theory gives a detailed account of the statistical and computational trade-off in our procedure, revealing a subtle interplay between the effective sample size and the number of random projections required to achieve the minimax optimal rate. Numerical studies provide further insight into the procedure and confirm its highly competitive finite-sample performance.
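The high-level idea described above can be illustrated in code. The following is a minimal sketch, not the authors' exact algorithm: it draws random axis-aligned projections (coordinate subsets) of the sample covariance matrix, keeps the projections whose leading eigenvalue is largest, aggregates the absolute loadings of their leading eigenvectors into a per-coordinate score, and refits the leading eigenvector on the estimated support. The function name, parameter choices, and selection rule are all illustrative assumptions.

```python
import numpy as np

def sparse_pc_via_random_projections(X, k, d, n_proj=200, rng=None):
    """Illustrative sketch of sparse PCA via random axis-aligned projections.

    X: (n, p) data matrix; k: target sparsity of the principal component;
    d: dimension of each random projection; n_proj: number of projections.
    This simplification is for exposition only, not the paper's procedure.
    """
    rng = np.random.default_rng(rng)
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n  # sample covariance matrix

    results = []
    for _ in range(n_proj):
        idx = rng.choice(p, size=d, replace=False)  # random coordinate subset
        vals, vecs = np.linalg.eigh(S[np.ix_(idx, idx)])
        # record the leading eigenvalue and eigenvector of the projected matrix
        results.append((vals[-1], idx, vecs[:, -1]))

    # keep the projections with the largest leading eigenvalues,
    # then aggregate eigenvector information across the selected projections
    results.sort(key=lambda t: t[0], reverse=True)
    score = np.zeros(p)
    for _, idx, v in results[: max(1, n_proj // 10)]:
        score[idx] += np.abs(v)

    # estimate the support as the k highest-scoring coordinates,
    # and refit the leading eigenvector restricted to that support
    support = np.argsort(score)[-k:]
    _, vecs = np.linalg.eigh(S[np.ix_(support, support)])
    v_hat = np.zeros(p)
    v_hat[support] = vecs[:, -1]
    return v_hat / np.linalg.norm(v_hat)
```

Because each projection only requires an eigendecomposition of a small d-by-d matrix, the per-projection cost is low, which is the source of the statistical/computational trade-off the abstract refers to: more projections improve the chance of capturing the true support, at extra compute.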




