Adversarial Attacks for Embodied Agents

05/19/2020
by Aishan Liu, et al.

Adversarial attacks are valuable for providing insights into the blind spots of deep learning models and for helping to improve their robustness. Existing work on adversarial attacks has mainly focused on static scenes; however, it remains unclear whether such attacks are effective against embodied agents, which navigate and interact with dynamic environments. In this work, we take the first step toward studying adversarial attacks on embodied agents. In particular, we generate spatiotemporal perturbations to form 3D adversarial examples that exploit the interaction history in both the temporal and spatial dimensions. In the temporal dimension, since agents make predictions based on historical observations, we develop a trajectory attention module to explore the contribution of each scene view, which further helps localize the 3D objects that appear with the highest stimuli. Coordinating with these temporal clues, in the spatial dimension we adversarially perturb the physical properties (e.g., texture and 3D shape) of the contextual objects that appear in the most important scene views. Extensive experiments on the EQA-v1 dataset for several embodied tasks, in both white-box and black-box settings, demonstrate that our perturbations have strong attack and generalization abilities.
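The temporal step described above can be sketched in a simplified form: score each historical scene view against the agent's current state, then select the most influential views as targets for spatial perturbation. This is a hypothetical illustration under assumed interfaces (`view_features`, `query`, and `select_important_views` are illustrative names), not the authors' implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def trajectory_attention(view_features, query):
    """Weight historical scene views by their relevance to the agent's prediction.

    view_features: (T, D) array of per-view embeddings along the trajectory.
    query:         (D,)  vector summarizing the agent's current state.
    Returns attention weights over the T views (sums to 1).
    """
    # Scaled dot-product scores, one per historical view.
    scores = view_features @ query / np.sqrt(view_features.shape[1])
    return softmax(scores)

def select_important_views(weights, k=3):
    """Indices of the k highest-weighted views, i.e., where contextual
    objects would be adversarially perturbed in the spatial dimension."""
    return np.argsort(weights)[::-1][:k]
```

In this sketch, the attention weights play the role of the "scene view contributions": views with the highest weights localize the 3D objects whose texture or shape the spatial attack would then perturb.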


