Scaling Up Dataset Distillation to ImageNet-1K with Constant Memory

11/19/2022
by Justin Cui, et al.

Dataset distillation methods aim to compress a large dataset into a small set of synthetic samples such that models trained on them achieve performance competitive with regular training on the entire dataset. Among recently proposed methods, Matching Training Trajectories (MTT) achieves state-of-the-art performance on CIFAR-10/100, but has difficulty scaling to the ImageNet-1K dataset due to the large memory required to perform unrolled gradient computation through back-propagation. Surprisingly, we show that there exists a procedure to exactly calculate the gradient of the trajectory matching loss with a constant GPU memory requirement, independent of the number of unrolled steps. With this finding, the proposed memory-efficient trajectory matching method easily scales to ImageNet-1K with about 6x memory reduction while introducing only around 2% runtime overhead over the original MTT. Further, we find that assigning soft labels to the synthetic images is crucial for performance when scaling to a larger number of categories (e.g., 1,000), and we propose a novel soft-label version of trajectory matching that facilitates better alignment of model training trajectories on large datasets. The proposed algorithm not only surpasses the previous SOTA on ImageNet-1K under extremely low IPCs (Images Per Class), but also, for the first time, enables us to scale up to 50 IPCs on ImageNet-1K. Our method (TESLA) achieves 27.9% testing accuracy, a remarkable +18.2% improvement over the previous SOTA.
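
The central idea in the abstract, computing the trajectory-matching gradient with GPU memory that does not grow with the number of unrolled steps, can be illustrated with a small two-pass sketch. Everything below (the linear student, `student_loss`, `trajectory_matching_grad`, shapes, and hyper-parameters) is a hypothetical toy construction, not the paper's code; intermediate student weights are treated as constants with respect to the synthetic data, so this is a first-order simplification rather than the exact gradient derived in the paper.

```python
import torch
import torch.nn.functional as F

# Illustrative toy setup: a linear student on flattened inputs (names/shapes are placeholders).
num_classes, dim, ipc = 10, 64, 1
syn_x = torch.randn(num_classes * ipc, dim, requires_grad=True)  # learnable distilled data
syn_y = torch.arange(num_classes).repeat_interleave(ipc)         # hard labels for simplicity

def student_loss(w, x, y):
    """Classification loss of a linear student with weights w."""
    return F.cross_entropy(x @ w, y)

def trajectory_matching_grad(w_start, w_target, lr=0.1, n_steps=50):
    """Gradient of L = ||w_N - w_target||^2 / ||w_start - w_target||^2 w.r.t. the
    synthetic data, computed in two passes so that only one unrolled step's
    computation graph is alive at a time (peak memory is independent of n_steps)."""
    # Pass 1: roll out the student on the synthetic data without keeping any graph.
    w = w_start.detach().clone()
    for _ in range(n_steps):
        w_i = w.clone().requires_grad_(True)
        g = torch.autograd.grad(student_loss(w_i, syn_x.detach(), syn_y), w_i)[0]
        w = w - lr * g
    denom = (w_start - w_target).pow(2).sum()
    residual = 2.0 * (w - w_target) / denom          # dL/dw_N, held fixed below

    # Pass 2: re-roll the trajectory; at each step backpropagate only that step's
    # contribution, residual . (-lr * g_i), into syn_x, then let its graph be freed.
    syn_x.grad = None
    w = w_start.detach().clone()
    for _ in range(n_steps):
        w_i = w.clone().requires_grad_(True)
        g = torch.autograd.grad(student_loss(w_i, syn_x, syn_y), w_i, create_graph=True)[0]
        (-lr * (g * residual).sum()).backward()      # accumulates into syn_x.grad
        w = (w - lr * g).detach()
    return syn_x.grad

# Usage sketch with placeholder "teacher checkpoints" (random here for illustration).
w_teacher_t = torch.randn(dim, num_classes)          # teacher weights at step t
w_teacher_tM = torch.randn(dim, num_classes)         # teacher weights at step t+M
grad = trajectory_matching_grad(w_teacher_t, w_teacher_tM)
syn_x.data -= 1e-2 * grad                            # one update of the distilled data
```

A full implementation would replace the linear student with a ConvNet (e.g., via torch.func.functional_call), sample synthetic minibatches at each inner step, use real teacher checkpoints saved while training on the full dataset, and assign soft labels to the synthetic images as the abstract describes.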

Related research

02/28/2023 · DREAM: Efficient Dataset Distillation by Representative Matching
Dataset distillation aims to generate small datasets with little informa...

11/20/2022 · Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation
Model-based deep learning has achieved astounding successes due in part ...

07/30/2022 · Delving into Effective Gradient Matching for Dataset Condensation
As deep learning models and datasets rapidly scale up, network training ...

06/01/2022 · Dataset Distillation using Neural Feature Regression
Dataset distillation aims to learn a small synthetic dataset that preser...

05/28/2023 · Distill Gold from Massive Ores: Efficient Dataset Distillation via Critical Samples Selection
Data-efficient learning has drawn significant attention, especially give...

07/27/2017 · A Downsampled Variant of ImageNet as an Alternative to the CIFAR datasets
The original ImageNet dataset is a popular large-scale benchmark for tra...

06/22/2023 · Squeeze, Recover and Relabel: Dataset Condensation at ImageNet Scale From A New Perspective
We present a new dataset condensation framework termed Squeeze, Recover ...
