Stochastic Variance Reduced Gradient for affine rank minimization problem

11/05/2022
by   Ningning Han, et al.

We develop an efficient stochastic variance reduced gradient descent algorithm for the affine rank minimization problem, which consists of finding a matrix of minimum rank from linear measurements. As a stochastic gradient descent strategy, the proposed algorithm enjoys a lower per-iteration complexity than full-gradient methods. It also reduces the variance of the stochastic gradient at each iteration, accelerating the rate of convergence. We prove that the proposed algorithm converges linearly in expectation to the solution under a restricted isometry condition. Numerical experiments show that the proposed algorithm has a clearly advantageous balance of efficiency, adaptivity, and accuracy compared with other state-of-the-art greedy algorithms.
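The paper's exact algorithm is not reproduced here, but the idea can be illustrated with a minimal sketch: an SVRG-style iteration combined with rank-r hard thresholding (truncated SVD), assuming a least-squares loss on linear measurements y_i = ⟨A_i, X⟩. All function names, parameter values, and the projection step below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def svd_project(X, r):
    """Project a matrix onto the set of rank-r matrices via truncated SVD.
    (Illustrative rank constraint; the paper's projection may differ.)"""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def svrg_rank_min(A, y, r, step=0.002, epochs=30, inner=None, seed=0):
    """SVRG-style sketch for recovering a rank-r matrix X from
    measurements y_i = <A_i, X>.  A: (m, n1, n2), y: (m,)."""
    rng = np.random.default_rng(seed)
    m, n1, n2 = A.shape
    inner = inner or m
    X = np.zeros((n1, n2))
    for _ in range(epochs):
        # Full gradient of the least-squares loss at the snapshot.
        resid = np.tensordot(A, X, axes=([1, 2], [0, 1])) - y
        mu = np.tensordot(resid, A, axes=(0, 0)) / m
        X_tilde = X.copy()
        for _ in range(inner):
            i = rng.integers(m)
            gi = (np.sum(A[i] * X) - y[i]) * A[i]
            gi_tilde = (np.sum(A[i] * X_tilde) - y[i]) * A[i]
            # Variance-reduced stochastic step, then rank projection.
            X = svd_project(X - step * (gi - gi_tilde + mu), r)
    return X

# Small synthetic demo: recover a rank-2 matrix from Gaussian measurements.
rng = np.random.default_rng(1)
m, n1, n2, r = 160, 8, 8, 2
A = rng.standard_normal((m, n1, n2))
X_true = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))
y = np.tensordot(A, X_true, axes=([1, 2], [0, 1]))
X_hat = svrg_rank_min(A, y, r)
```

The key line is `gi - gi_tilde + mu`: the correction by the snapshot gradient keeps the stochastic update unbiased while shrinking its variance as the iterate approaches the solution, which is what allows a constant step size and linear convergence in expectation.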
