Certified Robustness of Quantum Classifiers against Adversarial Examples through Quantum Noise

11/02/2022
by Jhih-Cing Huang, et al.

Quantum classifiers have recently been shown to be vulnerable to adversarial attacks, in which imperceptible perturbations fool them into misclassification. In this paper, we present a first theoretical study showing that adding quantum random rotation noise can improve the robustness of quantum classifiers against adversarial attacks. We connect this approach to the definition of differential privacy and demonstrate that a quantum classifier trained in the natural presence of additive noise is differentially private. Finally, we derive a certified robustness bound that enables quantum classifiers to defend against adversarial examples, and we support it with experimental results.
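As a rough illustration of how a differential-privacy-based certificate of this kind can be checked in practice, the sketch below estimates class probabilities under randomized rotation-style noise and then applies the robustness condition of Lecuyer et al.'s PixelDP, the classical result that this line of work adapts to the quantum setting. The noise model, parameter values, and the stand-in classifier are illustrative assumptions, not the paper's actual construction.

    import numpy as np

    def noisy_predict_proba(classifier, x, n_samples=1000, sigma=0.1, rng=None):
        # Estimate class probabilities under random rotation-style noise:
        # each input feature is perturbed by an angle drawn from N(0, sigma^2)
        # before classification (an assumed, illustrative noise model).
        rng = np.random.default_rng() if rng is None else rng
        counts = np.zeros(classifier.n_classes)
        for _ in range(n_samples):
            angles = rng.normal(0.0, sigma, size=x.shape)
            counts[classifier.predict(x + angles)] += 1.0
        return counts / n_samples

    def is_certified(probs, eps, delta):
        # PixelDP-style condition for an (eps, delta)-DP prediction mechanism:
        # the top class k is certifiably robust if
        #     p_k > e^(2*eps) * max_{j != k} p_j + (1 + e^eps) * delta.
        k = int(np.argmax(probs))
        runner_up = float(np.max(np.delete(probs, k)))
        return probs[k] > np.exp(2 * eps) * runner_up + (1 + np.exp(eps)) * delta

    class ToyClassifier:
        # Stand-in for the measurement statistics of a quantum classifier
        # (hypothetical; the paper's classifiers are quantum circuits).
        n_classes = 2
        def predict(self, x):
            return int(x[0] > 0.0)

    clf = ToyClassifier()
    probs = noisy_predict_proba(clf, np.array([0.3, -0.2]), sigma=0.2)
    print(probs, is_certified(probs, eps=0.1, delta=1e-3))

A more careful treatment would replace the Monte-Carlo point estimate with a lower confidence bound on the top-class probability; the sketch is only meant to show how added noise plus a differential-privacy guarantee yields a checkable robustness condition.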


