One Pass ImageNet

by Huiyi Hu, et al.

We present the One Pass ImageNet (OPIN) problem, which aims to study the effectiveness of deep learning in a streaming setting. ImageNet is a widely known benchmark dataset that has helped drive and evaluate recent advancements in deep learning. Typically, deep learning methods are trained on static data that the models have random access to, using multiple passes over the dataset with a random shuffle at each epoch of training. Such a data-access assumption does not hold in many real-world scenarios where massive data is collected from a stream, and storing and accessing all of it becomes impractical due to storage costs and privacy concerns. In OPIN, we treat the ImageNet data as arriving sequentially, with a limited memory budget for storing a small subset of the data. We observe that training a deep network in a single pass with the same training settings used for multi-epoch training results in a large drop in prediction accuracy. We show that this performance gap can be significantly reduced by paying a small memory cost and utilizing techniques developed for continual learning, despite the fact that OPIN differs from typical continual learning settings. We propose OPIN as a benchmark for studying resource-efficient deep learning.
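The setup the abstract describes, a single pass over a stream with a small memory that stores a subset of the data, can be sketched with a reservoir-sampling replay buffer that mixes stored examples into each training batch. This is a minimal illustrative sketch, not the paper's actual method; the names `ReplayBuffer` and `one_pass_batches` and all parameter choices are assumptions for the example.

```python
import random


class ReplayBuffer:
    """Fixed-size memory filled by reservoir sampling over a stream.

    A new example replaces a stored one with probability capacity/seen,
    so the buffer is always a uniform sample of the stream so far.
    (Illustrative sketch; the paper's memory policy may differ.)
    """

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        # Draw up to k stored examples for replay.
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))


def one_pass_batches(stream, buffer, batch_size=4, replay_size=4):
    """Yield training batches during a single pass over `stream`.

    Each batch combines newly arrived examples with a draw from the
    replay memory, so past data is revisited without a second epoch.
    """
    batch = []
    for example in stream:
        batch.append(example)
        buffer.add(example)
        if len(batch) == batch_size:
            yield batch + buffer.sample(replay_size)
            batch = []
    if batch:  # flush the final partial batch
        yield batch + buffer.sample(replay_size)
```

In a real OPIN run the stream would be ImageNet images seen exactly once, and each yielded batch would feed one gradient step; the memory cost is just `capacity` examples.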



