Privacy-preserving Neural Representations of Text

08/28/2018
by Maximin Coavoux, et al.

This article deals with adversarial attacks on deep learning systems for Natural Language Processing (NLP), in the context of privacy protection. We study a specific type of attack: an attacker eavesdrops on the hidden representations of a neural text classifier and tries to recover information about the input text. Such a scenario may arise when the computation of a neural network is shared across multiple devices, e.g. when a hidden representation is computed by a user's device and sent to a cloud-based model. We measure the privacy of a hidden representation by the ability of an attacker to accurately predict specific private information from it, and we characterize the trade-off between the privacy and the utility of neural representations. Finally, we propose several defense methods based on modified training objectives and show that they improve the privacy of neural representations.
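The defense idea described above, modifying the training objective so the task classifier stays accurate while an eavesdropper's classifier does not, can be illustrated with a minimal sketch. The sketch below is an assumption-laden toy, not the paper's actual implementation: the hidden representation `z`, the classifier weights, the attribute labels, and the trade-off coefficient `lam` are all illustrative, and the combined loss is the generic adversarial form (minimize task loss, maximize the attacker's loss).

```python
import math
import random

def softmax(logits):
    # numerically stable softmax over a list of raw scores
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(logits, label):
    # negative log-likelihood of the true label
    return -math.log(softmax(logits)[label])

def linear(weights, bias, z):
    # a simple linear classifier head reading the hidden representation
    return [sum(w_i * z_i for w_i, z_i in zip(row, z)) + b
            for row, b in zip(weights, bias)]

# hypothetical hidden representation z produced on the user's device
z = [0.5, -1.2, 0.3, 0.8]

# illustrative weights: a main-task head (3 classes, e.g. topic) and an
# attacker head (2 classes, e.g. a private binary attribute); these
# names and sizes are assumptions for the sketch, not the paper's setup
random.seed(0)
W_task = [[random.gauss(0, 1) for _ in z] for _ in range(3)]
b_task = [0.0] * 3
W_adv = [[random.gauss(0, 1) for _ in z] for _ in range(2)]
b_adv = [0.0] * 2

task_loss = cross_entropy(linear(W_task, b_task, z), label=1)
adv_loss = cross_entropy(linear(W_adv, b_adv, z), label=0)

# adversarial defense objective: minimize the task loss while
# *maximizing* the attacker's loss on the private attribute;
# lam trades utility against privacy
lam = 0.5
defender_loss = task_loss - lam * adv_loss
print(round(defender_loss, 4))
```

In a real training loop the encoder producing `z` would be updated by gradient descent on `defender_loss`, while the attacker head is trained separately to minimize its own loss, which is what makes the privacy/utility trade-off measurable.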

Related research

- How to keep text private? A systematic review of deep learning methods for privacy-preserving natural language processing (05/20/2022)
- Privacy-Preserved Neural Graph Similarity Learning (10/21/2022)
- Privacy for Rescue: A New Testimony Why Privacy is Vulnerable In Deep Models (12/31/2019)
- Privacy Safe Representation Learning via Frequency Filtering Encoder (08/04/2022)
- Modeling Deep Learning Based Privacy Attacks on Physical Mail (12/22/2020)
- Adversarial Learning of Privacy-Preserving and Task-Oriented Representations (11/22/2019)
- Understanding Compressive Adversarial Privacy (09/21/2018)
