A Fusion-Denoising Attack on InstaHide with Data Augmentation

by Xinjian Luo, et al.

InstaHide is a state-of-the-art mechanism for protecting private training images in collaborative learning. It works by mixing multiple private images and modifying them so that their visual features are no longer distinguishable to the naked eye, without significantly degrading training accuracy. In recent work, however, Carlini et al. showed that it is possible to reconstruct private images from the encrypted dataset generated by InstaHide by exploiting the correlations among the encrypted images. Nevertheless, Carlini et al.'s attack relies on the assumption that each private image is used without modification when mixed with other private images; consequently, it can be easily defeated by incorporating data augmentation into InstaHide. This leads to a natural question: is InstaHide with data augmentation secure? This paper provides a negative answer by presenting an attack that recovers private images from the outputs of InstaHide even when data augmentation is present. The basic idea of our attack is to use a comparative network to identify encrypted images that are likely to correspond to the same private image, and then employ a fusion-denoising network to restore the private image from the encrypted ones, taking into account the effects of data augmentation. Extensive experiments demonstrate the effectiveness of the proposed attack in comparison to Carlini et al.'s attack.
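To make the setting concrete, the following is a minimal sketch of InstaHide-style encryption as described in the abstract: each encrypted sample is a random convex combination of several private and public images, followed by a random per-pixel sign flip that destroys visual features. Function names, the choice of Dirichlet-sampled coefficients, and the mixing counts `k_priv`/`k_pub` are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def instahide_encrypt(private_images, public_images, k_priv=2, k_pub=2, rng=None):
    """Illustrative InstaHide-style encryption (not the paper's exact scheme):
    mix a few private and public images with random convex coefficients,
    then apply a random per-pixel sign flip."""
    rng = np.random.default_rng(rng)
    # Pick which private and public images enter this mixture
    priv_idx = rng.choice(len(private_images), size=k_priv, replace=False)
    pub_idx = rng.choice(len(public_images), size=k_pub, replace=False)
    chosen = [private_images[i] for i in priv_idx] + [public_images[i] for i in pub_idx]
    # Random nonnegative mixing coefficients summing to 1 (convex combination)
    lam = rng.dirichlet(np.ones(len(chosen)))
    mixed = sum(l * x for l, x in zip(lam, chosen))
    # Random per-pixel sign flip hides the mixture's visual content
    sign = rng.choice([-1.0, 1.0], size=mixed.shape)
    return sign * mixed, priv_idx

# Example: 32x32 grayscale images with pixel values in [-1, 1]
priv = [np.random.default_rng(i).uniform(-1, 1, (32, 32)) for i in range(4)]
pub = [np.random.default_rng(10 + i).uniform(-1, 1, (32, 32)) for i in range(4)]
enc, used = instahide_encrypt(priv, pub, rng=0)
```

An attacker who sees many such outputs cannot invert a single one, which is why the attack described above first groups encrypted images sharing a private source (the comparative network) before fusing and denoising them.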




