Replica Exchange for Non-Convex Optimization

01/23/2020
by Jing Dong et al.

Gradient descent (GD) is known to converge quickly for convex objective functions, but it can be trapped in local minima. Langevin dynamics (LD), on the other hand, can explore the state space and find global minima, but to produce accurate estimates it must run with a small discretization stepsize and a weak stochastic force, which in general slow its convergence. This paper shows that the two algorithms can "collaborate" through a simple exchange mechanism, in which they swap their current positions whenever LD yields a lower objective value. This idea can be seen as the singular limit of the replica exchange technique from the sampling literature. We show that the new algorithm converges to the global minimum linearly with high probability, assuming the objective function is strongly convex in a neighborhood of its unique global minimum. By replacing gradients with stochastic gradients and adding a proper threshold to the exchange mechanism, the algorithm can also be used in online settings. We verify our theoretical results through numerical experiments and observe that the proposed algorithm outperforms running GD or LD alone.
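The exchange mechanism described in the abstract is simple enough to sketch in code. The following is a minimal, illustrative Python implementation of the idea, not the paper's exact algorithm: the function name replica_exchange_gd_ld, the stepsize eta, and the noise scale sigma are placeholder choices, and only the offline (full-gradient, no-threshold) variant is shown.

```python
import numpy as np

def replica_exchange_gd_ld(f, grad, x0, eta=0.1, sigma=0.1, n_iters=1000, seed=0):
    """Illustrative sketch: a GD replica and an LD replica that swap positions
    whenever the LD replica finds a lower objective value. Parameter values
    are placeholders, not the paper's recommended choices."""
    rng = np.random.default_rng(seed)
    x_gd = np.asarray(x0, dtype=float).copy()  # GD replica: exploits locally
    x_ld = x_gd.copy()                         # LD replica: explores globally
    for _ in range(n_iters):
        x_gd = x_gd - eta * grad(x_gd)  # plain gradient-descent step
        # Discretized Langevin step: gradient drift plus Gaussian noise.
        noise = rng.standard_normal(x_ld.shape)
        x_ld = x_ld - eta * grad(x_ld) + sigma * np.sqrt(2.0 * eta) * noise
        # Exchange mechanism: swap the replicas' positions if LD is lower.
        if f(x_ld) < f(x_gd):
            x_gd, x_ld = x_ld, x_gd
    return x_gd  # the GD replica tracks the best basin found so far
```

On a toy double-well objective such as f(x) = (x**2 - 1)**2 + 0.2*x with grad(x) = 4*x*(x**2 - 1) + 0.2, a GD replica started near the poor local minimum typically jumps to the global one once the LD replica wanders across the barrier. In the online setting the abstract describes, f and grad would be replaced by stochastic estimates and the swap condition by a thresholded comparison.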

research · 10/18/2020
Accelerated Algorithms for Convex and Non-Convex Optimization on Manifolds
We propose a general scheme for solving convex and non-convex optimizati...

research · 03/06/2019
Why Learning of Large-Scale Neural Networks Behaves Like Convex Optimization
In this paper, we present some theoretical work to explain why simple gr...

research · 05/01/2020
Distributed Stochastic Non-Convex Optimization: Momentum-Based Variance Reduction
In this work, we propose a distributed algorithm for stochastic non-conv...

research · 01/26/2022
Born-Infeld (BI) for AI: Energy-Conserving Descent (ECD) for Optimization
We introduce a novel framework for optimization based on energy-conservi...

research · 02/13/2023
Convergence analysis for a nonlocal gradient descent method via directional Gaussian smoothing
We analyze the convergence of a nonlocal gradient descent method for min...

research · 03/08/2023
The Novel Adaptive Fractional Order Gradient Decent Algorithms Design via Robust Control
The vanilla fractional order gradient descent may oscillatively converge...

research · 12/19/2018
Breaking Reversibility Accelerates Langevin Dynamics for Global Non-Convex Optimization
Langevin dynamics (LD) has been proven to be a powerful technique for op...
