Adaptive Importance Sampling for Finite-Sum Optimization and Sampling with Decreasing Step-Sizes

03/23/2021
by Ayoub El Hanchi, et al.

Reducing the variance of the gradient estimator is known to improve the convergence rate of stochastic gradient-based optimization and sampling algorithms. One way of achieving variance reduction is to design importance sampling strategies. Recently, the problem of designing such schemes was formulated as an online learning problem with bandit feedback, and algorithms with sub-linear static regret were designed. In this work, we build on this framework and propose Avare, a simple and efficient algorithm for adaptive importance sampling for finite-sum optimization and sampling with decreasing step-sizes. Under standard technical conditions, we show that Avare achieves 𝒪(T^2/3) and 𝒪(T^5/6) dynamic regret for SGD and SGLD, respectively, when run with 𝒪(1/t) step sizes. We achieve this dynamic regret bound by leveraging our knowledge of the dynamics defined by the algorithm and by combining ideas from online learning and variance-reduced stochastic optimization. We empirically validate the performance of our algorithm and identify settings in which it leads to significant improvements.
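
As a rough illustration of the kind of scheme the abstract describes (not the Avare algorithm itself), the sketch below runs SGD on a least-squares problem with 𝒪(1/t) step sizes while adapting a non-uniform sampling distribution from the gradient norms observed at the sampled indices. The objective, the norm-tracking rule, and all variable names are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

# Minimal sketch of adaptive importance sampling for finite-sum SGD
# (illustrative only; not the Avare algorithm). Sampling probabilities p_i
# are kept roughly proportional to running estimates of per-example gradient
# norms, and each sampled gradient is reweighted by 1/(n * p_i) so the
# estimator stays unbiased. Step sizes decrease as O(1/t).

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

w = np.zeros(d)
grad_norm_est = np.ones(n)          # running per-example gradient norm estimates
eps = 1e-3                          # floor keeping every probability positive

for t in range(1, 2001):
    # Sampling distribution proportional to estimated gradient norms (with floor).
    p = grad_norm_est + eps
    p = p / p.sum()

    i = rng.choice(n, p=p)
    g_i = A[i] * (A[i] @ w - b[i])  # gradient of 0.5 * (a_i^T w - b_i)^2

    # Importance-weighted, unbiased gradient estimator.
    g_hat = g_i / (n * p[i])

    step = 1.0 / t                  # O(1/t) decreasing step size
    w -= step * g_hat

    # Bandit feedback: only the sampled index's estimate is updated.
    grad_norm_est[i] = np.linalg.norm(g_i)

print("final objective:", 0.5 * np.mean((A @ w - b) ** 2))
```

Reweighting each sampled gradient by 1/(n·p_i) keeps the estimator unbiased for any positive sampling distribution, so adapting p only affects the estimator's variance, which is what such importance sampling schemes aim to reduce.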

Related research

Stochastic Optimization with Importance Sampling (01/13/2014)
Uniform sampling of training data has been commonly used in traditional ...

On Exploration, Exploitation and Learning in Adaptive Importance Sampling (10/31/2018)
We study adaptive importance sampling (AIS) as an online learning proble...

Asymptotic Convergence of Thompson Sampling (11/08/2020)
Thompson sampling has been shown to be an effective policy across a vari...

Online Variance Reduction with Mixtures (03/29/2019)
Adaptive importance sampling for stochastic optimization is a promising ...

Stochastic Reweighted Gradient Descent (03/23/2021)
Despite the strong theoretical guarantees that variance-reduced finite-s...

Online Variance Reduction for Stochastic Optimization (02/13/2018)
Modern stochastic optimization methods often rely on uniform sampling wh...

The Sample Complexity of Approximate Rejection Sampling with Applications to Smoothed Online Learning (02/09/2023)
Suppose we are given access to n independent samples from distribution μ...
