Convergence of a Grassmannian Gradient Descent Algorithm for Subspace Estimation From Undersampled Data

by Dejiao Zhang, et al.

Subspace learning and matrix factorization problems have a great many applications in science and engineering, and efficient algorithms are critical as dataset sizes continue to grow. Many relevant problem formulations are non-convex, and in a variety of contexts it has been observed that solving the non-convex problem directly is not only efficient but reliably accurate. We discuss convergence theory for a particular method: first-order incremental gradient descent constrained to the Grassmannian. The output of the algorithm is an orthonormal basis for a d-dimensional subspace spanned by an input streaming data matrix. We study two sampling regimes: one in which each data vector of the streaming matrix is fully sampled, and one in which it is undersampled by a sampling matrix A_t ∈ ℝ^{m×n} with m ≪ n. We propose an adaptive stepsize scheme that depends only on the sampled data and algorithm outputs. We prove that with fully sampled data, the stepsize scheme maximizes the improvement of our convergence metric at each iteration, and the method converges from any random initialization to the true subspace, despite the non-convex formulation and orthogonality constraints. For undersampled data, we establish monotonic improvement of the convergence metric at each iteration with high probability.
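The fully sampled update described above can be sketched as a rank-one geodesic step on the Grassmannian, in the style of the GROUSE family of algorithms. The sketch below is illustrative only: the greedy angle `theta = arctan(||r|| / ||p||)` is one stepsize choice consistent with the abstract's description of an adaptive scheme computed from the data and algorithm outputs, not necessarily the paper's exact rule.

```python
import numpy as np

def grouse_step(U, v):
    """One Grassmannian incremental gradient step (fully sampled case).

    U : (n, d) array with orthonormal columns, the current subspace estimate
    v : (n,) data vector, assumed to lie near the true d-dimensional subspace

    The greedy angle used here is an illustrative choice: it rotates the
    subspace just enough that the updated span contains v exactly.
    """
    w = U.T @ v                 # least-squares weights (U has orthonormal columns)
    p = U @ w                   # projection of v onto the current subspace
    r = v - p                   # residual, orthogonal to span(U)
    pn, rn, wn = np.linalg.norm(p), np.linalg.norm(r), np.linalg.norm(w)
    if rn < 1e-12 or wn < 1e-12:
        return U                # v is already (numerically) in span(U)
    theta = np.arctan(rn / pn)  # greedy angle: new subspace contains v
    step = (np.cos(theta) - 1.0) * p / pn + np.sin(theta) * r / rn
    # Rank-one geodesic update; preserves orthonormality of U in exact arithmetic.
    return U + np.outer(step, w / wn)
```

Streaming vectors drawn from a fixed subspace through repeated calls to `grouse_step` drives the principal angles between the estimate and the true subspace toward zero, while the iterate stays on the Grassmannian (its columns remain orthonormal) without any explicit re-orthogonalization. In the undersampled regime the same structure applies, except the weights `w` are obtained from a least-squares fit against only the sampled measurements `A_t v`.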




