Efficient Policy Learning for Non-Stationary MDPs under Adversarial Manipulation

07/22/2019
by   Tiancheng Yu, et al.

A Markov Decision Process (MDP) is a popular model for reinforcement learning. However, its commonly used assumption of stationary dynamics and rewards is too stringent and fails to hold in adversarial, non-stationary, or multi-agent problems. We study an episodic setting in which the parameters of the MDP may differ across episodes. To learn a reliable policy in this potentially adversarial MDP, we develop an Adversarial Reinforcement Learning (ARL) algorithm that reduces the MDP to a sequence of adversarial bandit problems. ARL achieves O(√(SATH^3)) regret, which is optimal with respect to S, A, and T, and its dependence on H is the best (even for the usual stationary MDP) among existing model-free methods.
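The core idea of reducing learning in an adversarially changing MDP to adversarial bandits can be illustrated with the classic EXP3 algorithm: one adversarial bandit learner chooses among actions using importance-weighted reward estimates, so no stationarity of rewards is assumed. The sketch below is a generic EXP3 implementation on a toy two-action problem, not the paper's ARL algorithm; the class name, parameters, and reward probabilities are illustrative assumptions.

```python
import math
import random

class Exp3:
    """EXP3 adversarial bandit over n_arms actions.

    In the reduction sketched in the abstract, one such learner would be
    maintained per state (and per step of the horizon); here we run a
    single instance on a toy problem. Names and defaults are illustrative.
    """

    def __init__(self, n_arms, gamma=0.1):
        self.n_arms = n_arms
        self.gamma = gamma          # exploration rate
        self.weights = [1.0] * n_arms

    def _probs(self):
        # Mix the weight-proportional distribution with uniform exploration.
        total = sum(self.weights)
        return [(1 - self.gamma) * w / total + self.gamma / self.n_arms
                for w in self.weights]

    def select(self):
        # Sample an action from the current distribution.
        r, acc = random.random(), 0.0
        for a, p in enumerate(self._probs()):
            acc += p
            if r <= acc:
                return a
        return self.n_arms - 1

    def update(self, arm, reward):
        # Importance-weighted estimate: only the pulled arm is updated,
        # scaled by 1/p so the estimate is unbiased against an adversary.
        p = self._probs()[arm]
        self.weights[arm] *= math.exp(self.gamma * reward / (p * self.n_arms))
        # Normalize by the max weight to avoid floating-point overflow.
        m = max(self.weights)
        self.weights = [w / m for w in self.weights]

# Toy usage: arm 0 pays off more often (probabilities are made up).
random.seed(0)
learner = Exp3(n_arms=2, gamma=0.1)
means = [0.9, 0.2]
counts = [0, 0]
for t in range(2000):
    a = learner.select()
    r = 1.0 if random.random() < means[a] else 0.0
    learner.update(a, r)
    counts[a] += 1
```

After enough rounds the learner concentrates on the better arm while retaining the gamma-floor of exploration that the adversarial analysis requires.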


Related research

10/24/2020: Efficient Learning in Non-Stationary Linear Markov Decision Processes
  We study episodic reinforcement learning in non-stationary linear (a.k.a...

05/06/2022: Hitting time for Markov decision process
  We define the hitting time for a Markov decision process (MDP). We do no...

05/17/2020: Optimizing for the Future in Non-Stationary MDPs
  Most reinforcement learning methods are based upon the key assumption th...

04/20/2020: Data-Driven Learning and Load Ensemble Control
  Demand response (DR) programs aim to engage distributed small-scale flex...

10/13/2021: Block Contextual MDPs for Continual Learning
  In reinforcement learning (RL), when defining a Markov Decision Process ...

11/21/2017: Posterior Sampling for Large Scale Reinforcement Learning
  Posterior sampling for reinforcement learning (PSRL) is a popular algori...

05/03/2023: Human Machine Co-adaption Interface via Cooperation Markov Decision Process System
  This paper aims to develop a new human-machine interface to improve reha...
