DreamTeacher: Pretraining Image Backbones with Deep Generative Models

07/14/2023
by   Daiqing Li, et al.

In this work, we introduce DreamTeacher, a self-supervised feature-representation learning framework that uses generative networks to pre-train downstream image backbones. We propose to distill knowledge from a trained generative model into standard image backbones that have been well engineered for specific perception tasks. We investigate two types of knowledge distillation: 1) distilling learned generative features onto target image backbones, as an alternative to pretraining these backbones on large labeled datasets such as ImageNet, and 2) distilling labels obtained from generative networks with task heads onto the logits of target backbones. We perform extensive analyses across multiple generative models, dense-prediction benchmarks, and several pre-training regimes. We empirically find that DreamTeacher significantly outperforms existing self-supervised representation-learning approaches across the board. Unsupervised ImageNet pre-training with DreamTeacher leads to significant improvements over ImageNet classification pre-training on downstream datasets, showcasing generative models, and diffusion models in particular, as a promising approach to representation learning on large, diverse datasets without requiring manual annotation.
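The first distillation mode described above — regressing a target backbone's features onto a generative model's features — can be sketched in a few lines. This is a minimal illustration, not the paper's architecture: the feature shapes, the single linear regressor, and the plain gradient step are all assumptions made for the sake of a self-contained example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical features for one batch:
# teacher_feats stand in for a trained generative model's intermediate
# activations; student_feats stand in for the image backbone's activations
# at the matching spatial resolution (both flattened here for simplicity).
teacher_feats = rng.standard_normal((4, 256))   # (batch, teacher_dim)
student_feats = rng.standard_normal((4, 128))   # (batch, student_dim)

# A learned regressor maps student features into the teacher's feature space.
W = rng.standard_normal((128, 256)) * 0.01

def distill_loss(W):
    # Mean-squared error between regressed student features and the
    # (frozen) teacher features.
    pred = student_feats @ W
    return np.mean((pred - teacher_feats) ** 2)

# One plain gradient-descent step on the regressor. In a real setup the
# backbone's own weights would also receive this gradient via backprop,
# which is what makes the backbone learn the generative features.
lr = 0.01
pred = student_feats @ W
grad = (2.0 / pred.size) * (student_feats.T @ (pred - teacher_feats))
loss_before = distill_loss(W)
W -= lr * grad
loss_after = distill_loss(W)
```

The second mode (label distillation) would replace the MSE feature target with a soft-label objective, e.g. cross-entropy against the generative network's task-head outputs, applied to the backbone's logits.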

