Faithful Knowledge Distillation

06/07/2023
by Tom A. Lamb, et al.

Knowledge distillation (KD) has received much attention due to its success in compressing networks to allow for their deployment in resource-constrained systems. While the problem of adversarial robustness has been studied before in the KD setting, previous works overlook what we term the relative calibration of the student network with respect to its teacher in terms of soft confidences. In particular, we focus on two crucial questions with regard to a teacher-student pair: (i) do the teacher and student disagree at points close to correctly classified dataset examples, and (ii) is the distilled student as confident as the teacher around dataset examples? These are critical questions when considering the deployment of a smaller student network trained from a robust teacher within a safety-critical setting. To address these questions, we introduce a faithful imitation framework to discuss the relative calibration of confidences, as well as provide empirical and certified methods to evaluate the relative calibration of a student w.r.t. its teacher. Further, to verifiably align the relative calibration incentives of the student to those of its teacher, we introduce faithful distillation. Our experiments on the MNIST and Fashion-MNIST datasets demonstrate the need for such an analysis and the advantages of the increased verifiability of faithful distillation over alternative adversarial distillation methods.
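
As a rough illustration of the two relative-calibration questions above, the sketch below probes a teacher-student pair around dataset examples using a simple FGSM-style perturbation and reports (i) the rate at which teacher and student disagree and (ii) the mean gap between their top-class soft confidences. This is a minimal, hypothetical probe: the toy models, the perturbation choice, and the helper name probe_relative_calibration are assumptions for illustration, not the paper's certified evaluation or its faithful distillation procedure.

```python
# Minimal sketch (not the authors' code) of an empirical probe of
# "relative calibration": around dataset examples, do the student and
# teacher disagree, and is the student as confident as the teacher?
# All names and model definitions here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def probe_relative_calibration(teacher, student, x, y, eps=0.1):
    """Compare teacher/student agreement and confidence on an
    FGSM-perturbed neighbourhood of the inputs x (labels y)."""
    teacher.eval()
    student.eval()

    # Craft a small perturbation of x using the student's loss (FGSM-style).
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(student(x_adv), y)
    loss.backward()
    x_adv = (x_adv + eps * x_adv.grad.sign()).detach()

    with torch.no_grad():
        p_t = F.softmax(teacher(x_adv), dim=1)  # teacher soft confidences
        p_s = F.softmax(student(x_adv), dim=1)  # student soft confidences

    # (i) Do teacher and student disagree near dataset examples?
    disagreement = (p_t.argmax(1) != p_s.argmax(1)).float().mean()
    # (ii) Is the student as confident as the teacher? (top-class gap)
    conf_gap = (p_t.max(1).values - p_s.max(1).values).mean()
    return disagreement.item(), conf_gap.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy stand-ins for a (robust) teacher and a distilled student.
    teacher = nn.Sequential(nn.Flatten(), nn.Linear(784, 128),
                            nn.ReLU(), nn.Linear(128, 10))
    student = nn.Sequential(nn.Flatten(), nn.Linear(784, 32),
                            nn.ReLU(), nn.Linear(32, 10))
    x = torch.rand(16, 1, 28, 28)        # MNIST-shaped dummy inputs
    y = torch.randint(0, 10, (16,))
    dis, gap = probe_relative_calibration(teacher, student, x, y)
    print(f"disagreement rate: {dis:.3f}, mean confidence gap: {gap:.3f}")
```

A positive mean confidence gap under this probe would suggest the student is systematically less confident than its teacher near dataset examples, which is the kind of mismatch the faithful imitation framework is meant to surface.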

Related research

10/31/2021
Rethinking the Knowledge Distillation From the Perspective of Model Calibration
Recent years have witnessed dramatic improvements in the knowledge d...

02/22/2023
Distilling Calibrated Student from an Uncalibrated Teacher
Knowledge distillation is a common technique for improving the performan...

03/14/2022
On the benefits of knowledge distillation for adversarial robustness
Knowledge distillation is normally used to compress a big network, or te...

10/22/2022
Hard Gate Knowledge Distillation – Leverage Calibration for Robust and Reliable Language Model
In knowledge distillation, a student model is trained with supervisions ...

10/11/2019
Improving Generalization and Robustness with Noisy Collaboration in Knowledge Distillation
Inspired by trial-to-trial variability in the brain that can result from...

10/03/2022
Robust Active Distillation
Distilling knowledge from a large teacher model to a lightweight one is ...

10/14/2022
Knowledge Distillation approach towards Melanoma Detection
Melanoma is regarded as the most threatening among all skin cancers. The...
