Online learning with dynamics: A minimax perspective

12/03/2020
by Kush Bhatia, et al.

We study the problem of online learning with dynamics, where a learner interacts with a stateful environment over multiple rounds. In each round, the learner selects a policy to deploy and incurs a cost that depends on both the chosen policy and the current state of the world. The state-evolution dynamics and the costs may be time-varying, possibly adversarially so. In this setting, we study the problem of minimizing policy regret and provide non-constructive upper bounds on the minimax rate for the problem. Our main results give sufficient conditions for online learnability in this setup, along with the corresponding rates. The rates are characterized by (1) a complexity term capturing the expressiveness of the underlying policy class under the state-change dynamics, and (2) a dynamics-stability term measuring the deviation of the instantaneous loss from a certain counterfactual loss. Further, we provide matching lower bounds showing that both complexity terms are indeed necessary. Our approach yields a unifying analysis that recovers regret bounds for several well-studied problems, including online learning with memory, online control of linear quadratic regulators, online Markov decision processes, and tracking adversarial targets. In addition, we show how our tools yield tight regret bounds for new problems (with non-linear dynamics and non-convex losses) for which such bounds were not known prior to our work.
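The policy-regret objective described in the abstract can be sketched as follows. This is a standard formulation of policy regret under state dynamics; the notation (losses $\ell_t$, dynamics $\Phi_t$, policy class $\Pi$) is illustrative and not taken verbatim from the paper.

```latex
% Learner plays policies \pi_1,\dots,\pi_T; the state evolves under
% (possibly adversarial, time-varying) dynamics \Phi_t, and losses
% \ell_t are incurred along the realized trajectory.
R_T \;=\; \sum_{t=1}^{T} \ell_t(x_t, \pi_t)
  \;-\; \min_{\pi \in \Pi} \sum_{t=1}^{T} \ell_t\!\left(x_t^{\pi}, \pi\right),
\qquad
x_{t+1} = \Phi_t(x_t, \pi_t), \quad
x_{t+1}^{\pi} = \Phi_t\!\left(x_t^{\pi}, \pi\right).
```

Here $x_t^{\pi}$ is the counterfactual state sequence that would have arisen had the comparator policy $\pi$ been deployed from the start; comparing against this counterfactual trajectory, rather than the realized states $x_t$, is what distinguishes policy regret from standard external regret.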


