VENOMAVE: Clean-Label Poisoning Against Speech Recognition

10/21/2020
by Hojjat Aghakhani, et al.

In the past few years, practical systems that use Automatic Speech Recognition (ASR) to improve human-machine interaction have seen wide adoption. Modern ASR systems are based on neural networks, and prior research has demonstrated that these systems are susceptible to adversarial examples, i.e., malicious audio inputs that cause the victim's network to misclassify at run time. Whether ASR systems are also vulnerable to data poisoning attacks remains an open question. In such an attack, the manipulation happens during the training phase of the neural network: an adversary injects malicious inputs into the training set so that the neural network's integrity and performance are compromised. In this paper, we present the first data poisoning attack in the audio domain, called VENOMAVE. Prior work in the image domain has demonstrated several types of data poisoning attacks, but they cannot be applied to the audio domain. The main challenge is that the attack must target a time series of inputs: to enforce a targeted misclassification in an ASR system, we need to carefully craft a specific sequence of disturbed inputs for the target utterance, which is eventually decoded into the desired sequence of words. More specifically, the adversarial goal decomposes into a series of misclassification tasks, one per frame of the target file, and the poisoned system must misrecognize every one of them. To demonstrate the practical feasibility of our attack, we evaluate VENOMAVE on an ASR system that detects sequences of digits from 0 to 9. When poisoning only 0.94% of the dataset on average, we achieve an attack success rate of 83.33%. We conclude that data poisoning attacks against ASR systems represent a real threat that needs to be considered.
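To make the frame-wise poisoning idea concrete, below is a minimal, hypothetical sketch in PyTorch. It is not the authors' implementation: the toy surrogate model, the feature-collision-style objective, and all names (SurrogateModel, craft_poisons, eps, ...) are illustrative assumptions. The sketch perturbs poison frames within a small budget, so they plausibly keep their clean labels, while pulling their hidden representations toward one frame of the target utterance; repeating this for every frame of the target would yield the sequence of frame-level misclassifications the abstract describes.

```python
# Hypothetical sketch of clean-label poisoning for one target frame.
# A DNN acoustic model maps a frame of audio features to HMM-state
# posteriors; poisons keep their correct labels but are nudged so their
# hidden representation collides with the target frame's representation.

import torch
import torch.nn as nn

class SurrogateModel(nn.Module):
    """Toy stand-in for a DNN acoustic model: features -> state posteriors."""
    def __init__(self, n_feats=39, n_states=95):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_feats, 256), nn.ReLU())
        self.head = nn.Linear(256, n_states)

    def forward(self, x):
        return self.head(self.encoder(x))

def craft_poisons(model, poisons, target_frame, eps=0.05, steps=100, lr=1e-2):
    """Perturb poison frames within an eps ball (the clean-label budget)
    so their hidden representation approaches that of one target frame."""
    base = poisons.detach().clone()
    poisons = poisons.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([poisons], lr=lr)
    with torch.no_grad():
        target_repr = model.encoder(target_frame)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((model.encoder(poisons) - target_repr) ** 2).mean()
        loss.backward()
        opt.step()
        with torch.no_grad():  # project back into the perturbation budget
            poisons.data = base + (poisons.data - base).clamp(-eps, eps)
    return poisons.detach()

# Usage: craft poisons against each frame of the target utterance so the
# victim, once trained on them, decodes the attacker's word sequence.
model = SurrogateModel()
poisons = torch.randn(8, 39)   # training frames whose labels stay correct
target = torch.randn(1, 39)    # one frame of the target utterance
poisoned = craft_poisons(model, poisons, target)
```

Note the design choice implied by the abstract: because ASR decodes a whole time series, a single poisoned decision is not enough; the optimization above would have to be run per frame (or jointly over all frames) so that the decoded state sequence as a whole maps to the desired words.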

Related Research

12/14/2021  Robustifying automatic speech recognition by extracting slowly varying features
In the past few years, it has been shown that deep learning systems are ...

05/26/2023  Leveraging characteristics of the output probability distribution for identifying adversarial audio examples
Adversarial attacks represent a security threat to machine learning base...

10/25/2021  Beyond L_p clipping: Equalization-based Psychoacoustic Attacks against ASRs
Automatic Speech Recognition (ASR) systems convert speech into text and ...

05/24/2020  Detecting Adversarial Examples for Speech Recognition via Uncertainty Quantification
Machine learning systems and also, specifically, automatic speech recogn...

08/05/2019  Robust Over-the-Air Adversarial Examples Against Automatic Speech Recognition Systems
Automatic speech recognition (ASR) systems are possible to fool via targ...

03/19/2021  SoK: A Modularized Approach to Study the Security of Automatic Speech Recognition Systems
With the wide use of Automatic Speech Recognition (ASR) in applications ...

08/02/2023  Inaudible Adversarial Perturbation: Manipulating the Recognition of User Speech in Real Time
Automatic speech recognition (ASR) systems have been shown to be vulnera...
