Optimal Provable Robustness of Quantum Classification via Quantum Hypothesis Testing

09/21/2020
by   Maurice Weber, et al.

Quantum machine learning models have the potential to offer speedups and better predictive accuracy compared to their classical counterparts. However, like classical algorithms, quantum classification algorithms have been shown to be vulnerable to input perturbations, which can arise either from noisy implementations or, as a worst-case type of noise, from adversarial attacks. Such attacks can undermine both the reliability and the security of quantum classification algorithms. In order to develop defence mechanisms and to better understand the reliability of these algorithms, it is crucial to characterize their robustness properties in the presence of both natural noise sources and adversarial manipulation. From the observation that, unlike in the classical setting, the measurements involved in quantum classification algorithms are naturally probabilistic, we uncover and formalize a fundamental link between binary quantum hypothesis testing (QHT) and provably robust quantum classification. From the optimality of QHT, we then prove a robustness condition that is tight under modest assumptions and that enables us to develop a protocol for certifying robustness. Since this robustness condition is a guarantee against worst-case noise, our result extends naturally to scenarios in which the noise source is known. We thus also provide a framework for studying the reliability of quantum classification protocols in more general settings.
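To make the connection concrete, the sketch below (using NumPy only) computes the trace distance between two density matrices, the resulting Helstrom success probability for binary state discrimination with equal priors (a standard result from quantum hypothesis testing), and a simple sufficient robustness check. The check is an illustration, not the paper's tight QHT-based condition: it uses only the fact that a perturbation of trace distance at most eps can shift any measurement-outcome probability by at most eps, so a prediction cannot flip while the probability gap between the top two classes exceeds 2*eps.

```python
import numpy as np

def trace_distance(rho, sigma):
    """T(rho, sigma) = (1/2) * ||rho - sigma||_1, computed from the
    eigenvalues of the Hermitian difference rho - sigma."""
    eigs = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * np.sum(np.abs(eigs))

def helstrom_success(rho, sigma):
    """Optimal probability of correctly discriminating rho from sigma
    with equal priors (Helstrom bound): (1 + T(rho, sigma)) / 2."""
    return 0.5 * (1.0 + trace_distance(rho, sigma))

def naive_robustness_check(p_top, p_runner_up, eps):
    """Sufficient (not tight) certificate: robust if the gap between
    the two largest class probabilities exceeds 2 * eps."""
    return (p_top - p_runner_up) > 2.0 * eps

# Example: pure states |0> and |+> as 2x2 density matrices.
rho = np.array([[1.0, 0.0], [0.0, 0.0]])
plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
sigma = np.outer(plus, plus)

T = trace_distance(rho, sigma)           # 1/sqrt(2) for |0> vs |+>
print(round(T, 4))                       # 0.7071
print(round(helstrom_success(rho, sigma), 4))  # 0.8536
print(naive_robustness_check(0.9, 0.05, 0.2))  # True
```

The tight certificate derived in the paper improves on this naive gap condition precisely because it exploits the optimality of the QHT discrimination strategy rather than a worst-case probability shift.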

Related research

11/16/2020: Adversarially Robust Classification based on GLRT
Machine learning models are vulnerable to adversarial attacks that can o...

12/17/2021: Provable Adversarial Robustness in the Quantum Model
Modern machine learning systems have been applied successfully to a vari...

12/04/2021: Generalized Likelihood Ratio Test for Adversarially Robust Hypothesis Testing
Machine learning models are known to be susceptible to adversarial attac...

03/03/2020: Robust data encodings for quantum classifiers
Data representation is crucial for the success of machine learning model...

03/20/2020: Quantum noise protects quantum classifiers against adversaries
Noise in quantum information processing is often viewed as a disruptive ...

10/10/2020: Noise in Classification
This chapter considers the computational and statistical aspects of lear...

12/02/2022: Finitely Repeated Adversarial Quantum Hypothesis Testing
We formulate a passive quantum detector based on a quantum hypothesis te...
