Dataset Condensation with Differentiable Siamese Augmentation

02/16/2021
by Bo Zhao, et al.

In many machine learning problems, large-scale datasets have become the de-facto standard to train state-of-the-art deep networks, at the price of a heavy computational load. In this paper, we focus on condensing large training sets into significantly smaller synthetic sets which can be used to train deep neural networks from scratch with a minimal drop in performance. Inspired by recent training set synthesis methods, we propose Differentiable Siamese Augmentation, which enables the effective use of data augmentation to synthesize more informative synthetic images and thus achieves better performance when training networks with augmentations. Experiments on multiple image classification benchmarks demonstrate that the proposed method obtains substantial gains over the state-of-the-art, e.g. 7% improvements on the CIFAR10/100 datasets. We show that with less than 1% of the data our method achieves 99.6%, 94.9%, 88.5%, and 84.8% relative performance on MNIST, FashionMNIST, SVHN, and CIFAR10 respectively.
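To illustrate what "differentiable" and "siamese" mean here, below is a minimal PyTorch sketch of one such augmentation. It is an illustrative reconstruction, not the authors' released implementation: the function name siamese_scale and the choice of a scaling transform are assumptions for exposition. The two key ideas it demonstrates are (1) a single set of augmentation parameters is sampled and shared across the real and synthetic branches, and (2) the transform is built from differentiable operations so gradients can flow through the augmentation into the synthetic pixels.

import torch
import torch.nn.functional as F

def siamese_scale(real, syn, lo=0.8, hi=1.2):
    """Apply the *same* random scaling to a real and a synthetic batch.

    Hypothetical sketch: the transform uses affine_grid / grid_sample,
    which are differentiable, so a gradient-matching loss computed on
    the augmented batches can backpropagate into the synthetic images.
    """
    # Sample one augmentation parameter and share it across both
    # branches -- the "siamese" part of the method.
    s = torch.empty(1).uniform_(lo, hi).item()

    def scale(x):
        n = x.size(0)
        # 2x3 affine matrix for uniform scaling, batched over n images.
        theta = torch.tensor([[s, 0.0, 0.0],
                              [0.0, s, 0.0]], device=x.device)
        theta = theta.unsqueeze(0).expand(n, -1, -1)
        grid = F.affine_grid(theta, x.shape, align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

    return scale(real), scale(syn)

In a gradient-matching condensation loop, both augmented batches would be fed through the same network and the mismatch between their gradients minimized; because grid_sample is differentiable with respect to its input, that loss updates the synthetic pixels through the augmentation.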
