Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation

11/20/2022
by Jiawei Du, et al.

Model-based deep learning has achieved astounding successes due in part to the availability of large-scale real-world data. However, processing such massive amounts of data comes at a considerable cost in terms of computation, storage, training, and the search for good neural architectures. Dataset distillation has thus recently come to the fore. This paradigm involves distilling information from large real-world datasets into tiny and compact synthetic datasets such that processing the latter yields similar performance to the former. State-of-the-art methods primarily rely on learning the synthetic dataset by matching the gradients obtained during training between the real and synthetic data. However, these gradient-matching methods suffer from the accumulated trajectory error caused by the discrepancy between the distillation and subsequent evaluation. To alleviate the adverse impact of this accumulated trajectory error, we propose a novel approach that encourages the optimization algorithm to seek a flat trajectory. We show that, with the regularization towards a flat trajectory, the weights trained on synthetic data are robust against the perturbations introduced by the accumulated errors. Our method, called Flat Trajectory Distillation (FTD), is shown to boost the performance of gradient-matching methods by up to 4.7% on a subset of the ImageNet dataset with higher-resolution images. We also validate the effectiveness and generalizability of our method with datasets of different resolutions and demonstrate its applicability to neural architecture search.
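As context for the abstract, the sketch below illustrates the gradient-matching mechanism it refers to: the synthetic images are optimized so that the gradients they induce on a network mimic the gradients induced by real data. This is a minimal PyTorch-style sketch, not the authors' code; all names (model, syn_images, syn_opt, the cosine matching loss, and so on) are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def gradient_match_step(model, syn_images, syn_labels,
                        real_images, real_labels, syn_opt):
    """One distillation step: make gradients on synthetic data mimic real-data gradients."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Gradients of the network loss on a real mini-batch (treated as constants).
    real_loss = F.cross_entropy(model(real_images), real_labels)
    g_real = [g.detach() for g in torch.autograd.grad(real_loss, params)]

    # Gradients on the synthetic batch; keep the graph so the matching loss
    # can be backpropagated into the synthetic images themselves.
    syn_loss = F.cross_entropy(model(syn_images), syn_labels)
    g_syn = torch.autograd.grad(syn_loss, params, create_graph=True)

    # Matching loss: one minus cosine similarity, summed over parameter tensors.
    match = sum(1 - F.cosine_similarity(gs.flatten(), gr.flatten(), dim=0)
                for gs, gr in zip(g_syn, g_real))

    # syn_opt contains only the synthetic images (leaf tensors with
    # requires_grad=True), so only they are updated by this step.
    syn_opt.zero_grad()
    match.backward()
    syn_opt.step()
    return match.item()
```

The abstract's remedy, a regularization that pushes the training trajectory towards flat regions of the loss landscape, is only summarized above. Purely to illustrate what flatness-seeking optimization looks like, the second sketch uses sharpness-aware minimization (SAM) as a generic stand-in; FTD's actual regularizer is defined in the paper and differs in its details.

```python
import torch

def sharpness_aware_step(model, loss_fn, batch, base_opt, rho=0.05):
    """Update weights using gradients evaluated at a nearby, adversarially perturbed point."""
    inputs, targets = batch
    params = [p for p in model.parameters() if p.requires_grad]

    # First pass: gradient at the current weights.
    loss = loss_fn(model(inputs), targets)
    grads = torch.autograd.grad(loss, params)

    # Climb to the locally sharpest nearby point (norm-bounded ascent).
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))
    eps = [rho * g / (grad_norm + 1e-12) for g in grads]
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.add_(e)

    # Second pass: the gradient at the perturbed weights drives the actual update.
    base_opt.zero_grad()
    loss_fn(model(inputs), targets).backward()

    # Undo the perturbation, then step from the original weights.
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)
    base_opt.step()
```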


Related research

11/19/2022
Scaling Up Dataset Distillation to ImageNet-1K with Constant Memory
Dataset distillation methods aim to compress a large dataset into a smal...

05/30/2022
Dataset Condensation via Efficient Synthetic-Data Parameterization
The great success of machine learning with massive amounts of data comes...

03/03/2022
CAFE: Learning to Condense Dataset by Aligning Features
Dataset condensation aims at reducing the network training effort throug...

02/28/2023
DREAM: Efficient Dataset Distillation by Representative Matching
Dataset distillation aims to generate small datasets with little informa...

03/22/2022
Dataset Distillation by Matching Training Trajectories
Dataset distillation is the task of synthesizing a small dataset such th...

03/08/2023
DiM: Distilling Dataset into Generative Model
Dataset distillation reduces the network training cost by synthesizing s...

06/15/2020
Flexible Dataset Distillation: Learn Labels Instead of Images
We study the problem of dataset distillation - creating a small set of s...
