No-Regret Online Reinforcement Learning with Adversarial Losses and Transitions

by Tiancheng Jin, et al.

Existing online learning algorithms for adversarial Markov decision processes achieve Õ(√T) regret after T rounds of interaction even when the loss functions are chosen arbitrarily by an adversary, with the caveat that the transition function has to be fixed. This is because it has been shown that adversarial transition functions make no-regret learning impossible. Despite such impossibility results, in this work we develop algorithms that can handle both adversarial losses and adversarial transitions, with regret degrading smoothly in the degree of maliciousness of the adversary. More concretely, we first propose an algorithm that enjoys Õ(√T + C^P) regret, where C^P measures how adversarial the transition functions are and can be at most O(T). While this algorithm itself requires knowledge of C^P, we further develop a black-box reduction approach that removes this requirement. Moreover, we show that a further refinement of the algorithm not only maintains the same regret bound, but also simultaneously adapts to easier environments (where losses are generated in a certain stochastically constrained manner, as in Jin et al. [2021]) and achieves Õ(U + √(U·C^L) + C^P) regret, where U is some standard gap-dependent coefficient and C^L is the amount of corruption on the losses.



Related research:

- Online Convex Optimization in Adversarial Markov Decision Processes
- Online Learning in Adversarial MDPs: Is the Communicating Case Harder than Ergodic?
- Simultaneously Learning Stochastic and Adversarial Episodic MDPs with Known Transition
- Best of Both Worlds Policy Optimization
- The Best of Both Worlds: Stochastic and Adversarial Episodic MDPs with Unknown Transition
- Byzantine-Robust Distributed Online Learning: Taming Adversarial Participants in An Adversarial Environment
