A Two-Timescale Framework for Bilevel Optimization: Complexity Analysis and Application to Actor-Critic

07/10/2020
by Mingyi Hong, et al.

This paper analyzes a two-timescale stochastic algorithm for a class of bilevel optimization problems with applications including policy optimization in reinforcement learning and hyperparameter optimization. We consider the case where the inner problem is unconstrained and strongly convex, and the outer problem is either strongly convex, convex, or weakly convex. We propose a nonlinear two-timescale stochastic approximation (TTSA) algorithm for tackling this class of bilevel problems. In the algorithm, a stochastic (semi)gradient update with a larger step size (faster timescale) is used for the inner problem, while a stochastic mirror descent update with a smaller step size (slower timescale) is used for the outer problem. When the outer problem is strongly convex (resp. weakly convex), the TTSA algorithm finds an 𝒪(K^-1/2)-optimal (resp. 𝒪(K^-2/5)-stationary) solution, where K is the total number of iterations. To the best of our knowledge, these are the first convergence rate results for nonlinear TTSA algorithms applied to this class of bilevel optimization problems. Lastly, specific to the application of policy optimization, we show that a two-timescale actor-critic proximal policy optimization algorithm can be viewed as a special case of our framework. The actor-critic algorithm converges at a rate of 𝒪(K^-1/4) in terms of the objective-value gap to a globally optimal policy.
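To make the two-timescale structure concrete, below is a minimal sketch on a toy quadratic bilevel problem. The functions inner_grad and outer_grad, the noise level, and the step-size exponents are illustrative assumptions, not the paper's exact setup; with the Euclidean Bregman divergence, the outer mirror descent step reduces to plain stochastic gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

def inner_grad(x, y):
    # Stochastic gradient of a toy inner objective g(x, y) = 0.5 * ||y - x||^2,
    # which is strongly convex in y with minimizer y*(x) = x.
    return (y - x) + 0.01 * rng.standard_normal(y.shape)

def outer_grad(x, y):
    # Stochastic (hyper)gradient estimate of the toy outer objective
    # f(x, y*(x)) = 0.5 * ||y*(x) - b||^2, evaluated at the current inner
    # iterate y instead of the exact solution y*(x).
    b = np.ones_like(x)
    return (y - b) + 0.01 * rng.standard_normal(x.shape)

x = np.zeros(5)
y = np.zeros(5)
K = 10_000
for k in range(1, K + 1):
    beta = 1.0 / k**0.4   # faster timescale: larger step for the inner update
    alpha = 0.1 / k**0.6  # slower timescale: smaller step for the outer update
    y = y - beta * inner_grad(x, y)   # inner stochastic (semi)gradient step
    x = x - alpha * outer_grad(x, y)  # outer step (mirror descent with the
                                      # Euclidean divergence, i.e. SGD)

print("x ≈", x.round(3), " y ≈", y.round(3))  # both approach the target b = 1
```

The key design point mirrored here is that the inner iterate y only tracks y*(x) approximately, so the outer step size must decay faster than the inner one for the coupled iteration to converge.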


Related research

07/14/2019
On the Global Convergence of Actor-Critic: A Case for Linear Quadratic Regulator with Ergodic Cost
Despite the empirical success of the actor-critic algorithm, its theoret...

06/10/2023
A Single-Loop Deep Actor-Critic Algorithm for Constrained Reinforcement Learning with Provable Convergence
Deep Actor-Critic algorithms, which combine Actor-Critic with...

10/10/2022
Actor-Critic or Critic-Actor? A Tale of Two Time Scales
We revisit the standard formulation of tabular actor-critic algorithm as...

06/02/2021
On the Convergence Rate of Off-Policy Policy Optimization Methods with Density-Ratio Correction
In this paper, we study the convergence properties of off-policy policy ...

05/18/2022
A2C is a special case of PPO
Advantage Actor-critic (A2C) and Proximal Policy Optimization (PPO) are ...

01/15/2022
Block Policy Mirror Descent
In this paper, we present a new class of policy gradient (PG) methods, n...

12/27/2021
Wasserstein Flow Meets Replicator Dynamics: A Mean-Field Analysis of Representation Learning in Actor-Critic
Actor-critic (AC) algorithms, empowered by neural networks, have had sig...
