Forgetting Data from Pre-trained GANs

06/29/2022
by Zhifeng Kong, et al.

Large pre-trained generative models are known to occasionally produce samples that are undesirable for various reasons. The standard way to mitigate this is to re-train the model differently. In this work, we take a different, more compute-friendly approach and investigate how to post-edit a model after training so that it forgets certain kinds of samples. We provide three different algorithms for GANs that differ in how the samples to be forgotten are described. Extensive evaluations on real-world image datasets show that our algorithms are capable of forgetting data while retaining high generation quality, at a fraction of the cost of full re-training.
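
As a purely illustrative sketch of what post-editing a generator for forgetting might look like (not the paper's actual algorithms), the PyTorch snippet below fine-tunes a toy generator with two terms: a penalty that pushes samples away from a region flagged by an assumed `undesirability_score` classifier, and a distillation term against a frozen copy of the original generator to retain generation quality. The names `ToyGenerator`, `undesirability_score`, and `lambda_retain` are assumptions made for this example; the paper itself describes three algorithms that differ in how the samples to be forgotten are specified.

```python
# Hypothetical sketch: post-hoc "forgetting" by fine-tuning a pre-trained GAN generator.
# This is NOT the paper's method; it only illustrates the general idea of editing the
# generator so flagged samples become unlikely while a distillation term preserves quality.

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyGenerator(nn.Module):
    """Stand-in for a pre-trained GAN generator (latent z -> flattened image)."""
    def __init__(self, z_dim=64, img_dim=3 * 32 * 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Tanh(),
        )
    def forward(self, z):
        return self.net(z)

def undesirability_score(x):
    """Assumed classifier returning a score in [0, 1]; 1 = sample should be forgotten."""
    return torch.sigmoid(x.mean(dim=1))

z_dim = 64
G = ToyGenerator(z_dim)              # pre-trained generator to be post-edited
G_ref = copy.deepcopy(G).eval()      # frozen copy, used to retain generation quality
for p in G_ref.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(G.parameters(), lr=1e-4)
lambda_retain = 10.0                 # weight on the quality-retention (distillation) term

for step in range(200):
    z = torch.randn(128, z_dim)
    x = G(z)

    # Push generated samples away from the undesirable (to-be-forgotten) region.
    forget_loss = undesirability_score(x).mean()

    # Keep the edited generator close to the original on the same latents,
    # so samples we do not want to forget stay approximately unchanged.
    with torch.no_grad():
        x_ref = G_ref(z)
    retain_loss = F.mse_loss(x, x_ref)

    loss = forget_loss + lambda_retain * retain_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this sketch, only the generator is updated and only for a small number of steps, which is what keeps the cost well below full re-training; the weight `lambda_retain` trades off forgetting strength against fidelity on the data that should be kept.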


