On Asymptotic Linear Convergence of Projected Gradient Descent for Constrained Least Squares

by Trung Vu et al.

Many recent problems in signal processing and machine learning, such as compressed sensing, image restoration, matrix/tensor recovery, and non-negative matrix factorization, can be cast as constrained optimization. Projected gradient descent (PGD) is a simple yet efficient method for solving such constrained optimization problems. Local convergence analysis furthers our understanding of its asymptotic behavior near the solution, offering sharper bounds on the convergence rate compared to global convergence analysis. However, local guarantees often appear scattered in problem-specific areas of machine learning and signal processing. This manuscript presents a unified framework for the local convergence analysis of projected gradient descent in the context of constrained least squares. The proposed analysis offers insights into pivotal local convergence properties such as the conditions for linear convergence, the region of convergence, the exact asymptotic rate of convergence, and the bound on the number of iterations needed to reach a certain level of accuracy. To demonstrate the applicability of the proposed approach, we present a recipe for the convergence analysis of PGD and apply it beginning-to-end to four fundamental problems, namely, linearly constrained least squares, sparse recovery, least squares with the unit norm constraint, and matrix completion.
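To make the setting concrete, here is a minimal sketch of PGD on a constrained least squares problem. The non-negativity constraint, the problem dimensions, and the 1/L step size are illustrative assumptions, not choices taken from the paper; the projection step is simply a clip onto the feasible set.

```python
import numpy as np

# Sketch of PGD for min_{x >= 0} 0.5 * ||A x - b||^2.
# The constraint x >= 0 has a closed-form projection: elementwise clipping.
rng = np.random.default_rng(0)
m, n = 50, 20
A = rng.standard_normal((m, n))
x_true = np.abs(rng.standard_normal(n))   # feasible ground truth
b = A @ x_true

L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
step = 1.0 / L
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - b)              # gradient of the least squares loss
    x = np.maximum(x - step * grad, 0.0)  # gradient step, then projection onto x >= 0

print(np.linalg.norm(A @ x - b))
```

Near the solution, the iterates of such a scheme typically contract at a linear rate; characterizing that rate exactly is the subject of the paper's local analysis.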

