Detecting Adversarial Directions in Deep Reinforcement Learning to Make Robust Decisions

06/09/2023
by Ezgi Korkmaz, et al.

Learning in MDPs with highly complex state representations is now possible thanks to multiple advances in reinforcement learning algorithm design. However, this increase in complexity, and in particular in the dimensionality of the observation space, comes at the cost of volatility that can be exploited via adversarial attacks (i.e. moving along worst-case directions in the observation space). To address this policy instability problem we propose a novel method that detects the presence of these non-robust directions via a local quadratic approximation of the deep neural policy loss. Our method provides a theoretical basis for the fundamental cut-off between safe observations and adversarial observations. Furthermore, our technique is computationally efficient and does not depend on the method used to produce the worst-case directions. We conduct extensive experiments in the Arcade Learning Environment with several different adversarial attack techniques. Most significantly, we demonstrate the effectiveness of our approach even in the setting where the non-robust directions are explicitly optimized to circumvent our proposed method.
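The abstract does not spell out the detection criterion, but the general idea of a local quadratic approximation can be sketched as follows: expand the policy loss to second order along a candidate direction and flag the direction as non-robust when the curvature term is large. The code below is a minimal illustration of that idea, not the paper's actual algorithm; the toy loss, the finite-difference curvature estimate, and the threshold `tau` are all hypothetical choices for the sake of a runnable example.

```python
import numpy as np

def quadratic_score(loss, s, v, eps=1e-3):
    """Estimate the curvature of the loss along unit direction v at state s.
    Central finite differences give v^T H v ~=
    (L(s + eps*v) - 2*L(s) + L(s - eps*v)) / eps^2."""
    v = v / np.linalg.norm(v)
    return (loss(s + eps * v) - 2.0 * loss(s) + loss(s - eps * v)) / eps**2

def is_adversarial_direction(loss, s, v, tau):
    """Flag v as a non-robust (adversarial) direction if the local
    quadratic term along v exceeds the threshold tau (hypothetical rule)."""
    return quadratic_score(loss, s, v) > tau

# Toy "policy loss": sharp curvature along axis 0, nearly flat along axis 1.
H = np.diag([100.0, 0.01])
loss = lambda s: 0.5 * s @ H @ s

s = np.array([0.1, 0.1])          # current observation
sharp = np.array([1.0, 0.0])      # high-curvature (worst-case) direction
flat = np.array([0.0, 1.0])       # benign direction

print(is_adversarial_direction(loss, s, sharp, tau=1.0))  # True
print(is_adversarial_direction(loss, s, flat, tau=1.0))   # False
```

In practice the perturbed states would be passed through the trained policy network, and the curvature along a direction could be obtained with a Hessian-vector product rather than finite differences; the appeal of a quadratic criterion is that it only requires evaluations around the current observation, independent of how the attack was generated.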

