Spirit Distillation: Precise Real-time Prediction with Insufficient Data

03/25/2021
by   Zhiyuan Wu, et al.

Recent trends demonstrate the effectiveness of deep neural networks (DNNs) applied to environment perception in autonomous driving systems. While large-scale, complete data can yield well-trained DNNs, collecting such data is difficult, expensive, and time-consuming. Moreover, because recognition must run in real time, the importance of both accuracy and efficiency cannot be over-emphasized. To alleviate the conflict between scarce data and the high computational cost of DNNs, we propose a new training framework named Spirit Distillation (SD). It extends the ideas of fine-tuning-based transfer learning (FTT) and feature-based knowledge distillation: by allowing the student to mimic its teacher in feature extraction, the gap in general features between the teacher and student networks is bridged. We also propose the Image Party distillation enhancement method (IP), which shuffles images from various domains and randomly selects a few of them to form each mini-batch. With this approach, overfitting of the student network to the teacher's general features can be easily avoided. Extensive experiments and discussions are conducted on Cityscapes with the support of COCO2017 and KITTI. The results demonstrate a boost in segmentation performance (mIoU and high-precision accuracy improve by 1.4), and a precise compact network can be obtained with only 41.8% of the FLOPs (see Fig. 1). This paper is a pioneering work on applying knowledge distillation to few-shot learning. The proposed methods significantly reduce the dependence of DNN training on data and improve the robustness of DNNs in rare situations, while satisfying the real-time requirement. They provide important technical support for the advancement of scene perception technology for autonomous driving.
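To make the two ideas above concrete, the following is a minimal sketch (not the authors' released code) of how feature-based distillation and IP-style mini-batch sampling could be combined: the student's feature maps are pushed toward a frozen teacher's via an L2 loss, and each mini-batch is drawn at random from images pooled across several domains. The function names, the projection layer `proj`, and the use of an MSE feature loss are illustrative assumptions.

```python
# Sketch only: feature-mimicking distillation with mixed-domain (IP-style) mini-batches.
import random
import torch
import torch.nn as nn

def ip_minibatch(domains, batch_size):
    """Pool images from several domains, shuffle, and draw a random mini-batch.
    `domains` is a list of lists of image tensors with identical shapes (assumption)."""
    pool = [img for domain in domains for img in domain]
    random.shuffle(pool)
    return torch.stack(pool[:batch_size])

def distillation_step(student, teacher, proj, images, optimizer):
    """One training step: push the student's features toward the teacher's.
    `proj` maps student feature maps to the teacher's channel width;
    `optimizer` is assumed to cover the parameters of both `student` and `proj`."""
    teacher.eval()
    with torch.no_grad():
        t_feat = teacher(images)      # frozen teacher feature maps
    s_feat = proj(student(images))    # student features projected to teacher's width
    loss = nn.functional.mse_loss(s_feat, t_feat)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, drawing each mini-batch from a shuffled multi-domain pool is what discourages the student from overfitting to the general features the teacher learned on any single source domain.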


