Matrix Completion via Nonconvex Regularization: Convergence of the Proximal Gradient Algorithm

by Fei Wen, et al.

Matrix completion has attracted much interest over the past decade in machine learning and computer vision. For low-rank promotion in matrix completion, the nuclear norm penalty is convenient due to its convexity, but it suffers from a bias problem. Recently, various algorithms using nonconvex penalties have been proposed, among which the proximal gradient descent (PGD) algorithm is one of the most efficient and effective. For the nonconvex PGD algorithm, whether it converges to a local minimizer, and at what rate, remains unclear. This work provides a nontrivial analysis of the PGD algorithm in the nonconvex case. Beyond convergence to a stationary point for a generalized nonconvex penalty, we provide a deeper analysis of a popular and important class of nonconvex penalties whose thresholding functions are discontinuous. For such penalties, we establish finite rank convergence, convergence to a restricted strictly local minimizer, and an eventually linear convergence rate of the PGD algorithm. Moreover, convergence to a local minimizer is proved for the hard-thresholding penalty. Our result is the first to show that nonconvex regularized matrix completion only has restricted strictly local minimizers, and that the PGD algorithm can converge to such minimizers at an eventually linear rate under certain conditions. An illustration of the PGD algorithm via experiments is also provided. Code is available at
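As a concrete illustration of the algorithm the abstract describes, the following is a minimal sketch of PGD for matrix completion with a hard-thresholding penalty on the singular values (one of the penalties with a discontinuous thresholding function). This is not the authors' released code; the function name, step size, and regularization value are illustrative assumptions.

```python
import numpy as np

def pgd_matrix_completion(M, mask, lam=2.0, step=1.0, n_iter=300):
    """Proximal gradient descent for matrix completion with a
    hard-thresholding (l0-type) penalty on the singular values.
    A minimal sketch; `lam`, `step`, and `n_iter` are illustrative.

    M    : observed matrix (unobserved entries can be arbitrary)
    mask : 0/1 array, 1 where an entry of M is observed
    """
    X = np.zeros_like(M)
    for _ in range(n_iter):
        # Gradient step on the data-fit term 0.5 * ||P_Omega(X - M)||_F^2
        G = X - step * mask * (X - M)
        # Proximal step for the hard-thresholding penalty: keep singular
        # values above sqrt(2 * lam * step), zero out the rest
        # (a discontinuous thresholding function)
        U, s, Vt = np.linalg.svd(G, full_matrices=False)
        s = np.where(s > np.sqrt(2.0 * lam * step), s, 0.0)
        X = (U * s) @ Vt
    return X
```

The hard threshold truncates the rank at every iteration, which is what makes the finite-rank-convergence result plausible: once the iterates settle, the set of surviving singular values stops changing.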




Matrix Completion with Nonconvex Regularization: Spectral Operators and Scalable Algorithms

In this paper, we study the popularly dubbed matrix completion problem, ...

Binary matrix completion with nonconvex regularizers

Many practical problems involve the recovery of a binary matrix from par...

On Local Convergence of Iterative Hard Thresholding for Matrix Completion

Iterative hard thresholding (IHT) has gained in popularity over the past...

Positive Definite Estimation of Large Covariance Matrix Using Generalized Nonconvex Penalties

This work addresses the issue of large covariance matrix estimation in h...

Low Rank Vectorized Hankel Lift for Matrix Recovery via Fast Iterative Hard Thresholding

We propose a VHL-FIHT algorithm for matrix recovery in blind super-resol...

LRSVRG-IMC: An SVRG-Based Algorithm for Low-Rank Inductive Matrix Completion

Low-rank inductive matrix completion (IMC) is currently widely used in I...

Accelerated and Inexact Soft-Impute for Large-Scale Matrix and Tensor Completion

Matrix and tensor completion aim to recover a low-rank matrix / tensor f...
