Model-Reuse Attacks on Deep Learning Systems

12/02/2018
by Yujie Ji, et al.

Many of today's machine learning (ML) systems are built by reusing an array of (often pre-trained) primitive models, each fulfilling a distinct functionality (e.g., feature extraction). The increasing use of primitive models significantly simplifies and expedites the development cycles of ML systems. Yet, because most such models are contributed and maintained by untrusted sources, their lack of standardization or regulation entails profound security implications, about which little is known thus far. In this paper, we demonstrate that malicious primitive models pose immense threats to the security of ML systems. We present a broad class of model-reuse attacks wherein maliciously crafted models trigger host ML systems to misbehave on targeted inputs in a highly predictable manner. By empirically studying four deep learning systems (including both individual and ensemble systems) used in skin cancer screening, speech recognition, face verification, and autonomous steering, we show that such attacks are (i) effective - the host systems misbehave on the targeted inputs as desired by the adversary with high probability, (ii) evasive - the malicious models function indistinguishably from their benign counterparts on non-targeted inputs, (iii) elastic - the malicious models remain effective regardless of various system design choices and tuning strategies, and (iv) easy - the adversary needs little prior knowledge about the data used for system tuning or inference. We provide analytical justification for the effectiveness of model-reuse attacks, which points to the unprecedented complexity of today's primitive models. This issue thus seems fundamental to many ML systems. We further discuss potential countermeasures and their challenges, which lead to several promising research directions.
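To make the attack surface concrete, the sketch below shows (in PyTorch) how a host ML system is typically assembled from a reused primitive model: a third-party, pre-trained feature extractor is composed with a small task-specific classifier that the developer trains. This is a minimal illustrative example, not code from the paper; the class and variable names (HostSystem, untrusted_extractor) are hypothetical.

```python
# Minimal sketch (PyTorch) of a host system built around a reused
# primitive model. Names are illustrative, not from the paper.
import torch
import torch.nn as nn

class HostSystem(nn.Module):
    """Host ML system = reused feature extractor + task-specific classifier."""

    def __init__(self, feature_extractor: nn.Module, feature_dim: int, num_classes: int):
        super().__init__()
        # The primitive model is usually downloaded pre-trained from an
        # untrusted source; it is the component a model-reuse attack replaces.
        self.feature_extractor = feature_extractor
        # Only this small head is trained (tuned) by the system developer.
        self.classifier = nn.Linear(feature_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():  # common practice: keep the reused model frozen
            features = self.feature_extractor(x)
        return self.classifier(features)

# Stand-in for a third-party pre-trained extractor (e.g., downloaded weights).
untrusted_extractor = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())

host = HostSystem(untrusted_extractor, feature_dim=128, num_classes=10)
logits = host(torch.randn(4, 1, 28, 28))
print(logits.shape)  # torch.Size([4, 10])
```

In a model-reuse attack, untrusted_extractor would be a maliciously crafted model that behaves normally on ordinary inputs (evasiveness) but steers the downstream classifier's output on the adversary's targeted inputs, regardless of how the classifier head is subsequently tuned (elasticity).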


Related research

08/25/2017 · Modular Learning Component Attacks: Today's Reality, Tomorrow's Challenge
Many of today's machine learning (ML) systems are not built from scratch...

08/01/2020 · Trojaning Language Models for Fun and Profit
Recent years have witnessed a new paradigm of building natural language ...

05/28/2021 · Targeted Deep Learning: Framework, Methods, and Applications
Deep learning systems are typically designed to perform for a wide range...

08/29/2019 · Defending Against Misclassification Attacks in Transfer Learning
Transfer learning accelerates the development of new models (Student Mod...

11/18/2020 · Privacy Leakage of Real-World Vertical Federated Learning
Federated learning enables mutually distrusting participants to collabor...

10/12/2021 · On the Security Risks of AutoML
Neural Architecture Search (NAS) represents an emerging machine learning...
