MSTGD: A Memory Stochastic sTratified Gradient Descent Method with an Exponential Convergence Rate

02/21/2022
by Aixiang, et al.

Current mainstream gradient optimization algorithms neglect or conflate the fluctuation of the gradient's expectation and variance caused by parameter updates between consecutive iterations. Exploiting this fluctuation effect, combined with a stratified sampling strategy, this paper designs a novel Memory Stochastic sTratified Gradient Descent (MSTGD) algorithm with an exponential convergence rate. Specifically, MSTGD uses two strategies for variance reduction: the first performs variance reduction according to the proportion p of historical gradient that is reused, estimated from the mean and variance of the sample gradients before and after an iteration; the second is stratified sampling by category. The statistic G̅_mst designed under these two strategies is adaptively unbiased, and its variance decays at a geometric rate. This enables MSTGD, built on G̅_mst, to attain an exponential convergence rate of the form λ^(2(k-k_0)), where λ ∈ (0,1) depends on the proportion p and k is the number of iteration steps. Unlike most other algorithms that claim an exponential convergence rate, this rate is independent of parameters such as the dataset size N and the batch size n, and it is achieved with a constant step size. Theoretical and experimental results demonstrate the effectiveness of MSTGD.
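The abstract's two variance-reduction ingredients can be sketched concretely: reuse a proportion p of a remembered gradient, and draw each mini-batch stratified by category. Below is a minimal illustrative sketch on a least-squares toy problem. It is not the paper's G̅_mst statistic or its adaptive estimation of p; the function names, the fixed p, the learning rate, and the synthetic category labels are all assumptions for illustration.

```python
import numpy as np


def stratified_batch(X, y, labels, n_per_class, rng):
    # Stratified sampling by category: draw n_per_class indices
    # from every category, so each class is equally represented.
    idx = np.concatenate([
        rng.choice(np.flatnonzero(labels == c), size=n_per_class, replace=False)
        for c in np.unique(labels)
    ])
    return X[idx], y[idx]


def mstgd_step(w, X, y, labels, g_mem, p, lr, rng, n_per_class=8):
    # Fresh stratified mini-batch gradient of the least-squares loss.
    Xb, yb = stratified_batch(X, y, labels, n_per_class, rng)
    g_new = 2.0 * Xb.T @ (Xb @ w - yb) / len(yb)
    # Memory-weighted gradient: proportion p of the stored gradient,
    # 1 - p of the fresh one (a fixed-p stand-in for the adaptive scheme).
    g_bar = p * g_mem + (1.0 - p) * g_new
    return w - lr * g_bar, g_bar


# Noiseless toy regression problem with two synthetic categories
# used only to demonstrate the stratification step.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
labels = (X[:, 0] > 0).astype(int)

w = np.zeros(3)
g_mem = np.zeros(3)
for _ in range(300):
    w, g_mem = mstgd_step(w, X, y, labels, g_mem, p=0.3, lr=0.05, rng=rng)

# After training, w should closely approximate w_true.
```

With a constant step size and a fixed memory proportion, the iterates on this noiseless problem shrink the error geometrically, which is the qualitative behavior the abstract claims for the full algorithm.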

Related research

- A Novel Stochastic Stratified Average Gradient Method: Convergence Rate and Its Complexity (10/21/2017)
- G̅_mst: An Unbiased Stratified Statistic and a Fast Gradient Optimization Algorithm Based on It (10/07/2021)
- Fast Variance Reduction Method with Stochastic Batch Size (08/07/2018)
- Matrix Exponential Learning for Resource Allocation with Low Informational Exchange (02/19/2018)
- Stop Wasting My Gradients: Practical SVRG (11/05/2015)
- One Sample Stochastic Frank-Wolfe (10/10/2019)
- Unbiased estimators for the Heston model with stochastic interest rates (01/28/2023)
