Dompteur: Taming Audio Adversarial Examples

by Thorsten Eisenhofer et al.

Adversarial examples seem to be inevitable. These specifically crafted inputs allow attackers to arbitrarily manipulate machine learning systems. Even worse, they often seem harmless to human observers. In our digital society, this poses a significant threat. For example, Automatic Speech Recognition (ASR) systems, which serve as hands-free interfaces to many kinds of systems, can be attacked with inputs incomprehensible to human listeners. The research community has unsuccessfully tried several approaches to tackle this problem. In this paper we propose a different perspective: We accept the presence of adversarial examples against ASR systems, but we require them to be perceivable by human listeners. By applying the principles of psychoacoustics, we can remove semantically irrelevant information from the ASR input and train a model that resembles human perception more closely. We implement our idea in a tool named Dompteur and demonstrate that our augmented system, in contrast to an unmodified baseline, successfully focuses on perceptible ranges of the input signal. This change forces adversarial examples into the audible range, while incurring minimal computational overhead and preserving benign performance. To evaluate our approach, we construct an adaptive attacker that actively tries to evade our augmentations, and we demonstrate that adversarial examples from this attacker remain clearly perceivable. Finally, we substantiate our claims by performing a hearing test with crowd-sourced human listeners.
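One of the augmentations the abstract alludes to, removing semantically irrelevant information before the signal reaches the ASR model, can be illustrated with a simple band-pass filter that keeps only the frequency range most relevant for speech. The sketch below is an illustrative approximation, not the Dompteur implementation; the cutoff frequencies and filter order are assumptions chosen for demonstration.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def bandpass_filter(audio, sample_rate=16000, low_hz=300, high_hz=5000):
    """Discard frequencies outside the band that carries most speech
    information; content there contributes little to intelligibility
    and gives adversarial perturbations a place to hide."""
    nyquist = sample_rate / 2
    sos = butter(10, [low_hz / nyquist, high_hz / nyquist],
                 btype="bandpass", output="sos")
    return sosfilt(sos, audio)

# One second of audio at 16 kHz: a 100 Hz tone (outside the band)
# and a 1 kHz tone (inside the band).
t = np.arange(16000) / 16000.0
low_tone = np.sin(2 * np.pi * 100 * t)
speech_tone = np.sin(2 * np.pi * 1000 * t)

# Skip the filter's initial transient before comparing amplitudes.
filtered_low = bandpass_filter(low_tone)[4000:]
filtered_speech = bandpass_filter(speech_tone)[4000:]
# The out-of-band tone is strongly attenuated; the in-band tone
# passes through nearly unchanged.
```

A perturbation hidden below or above the retained band is simply erased by such a filter, which is why an adaptive attacker is forced to place its modifications inside the audible, speech-relevant range.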


