Random Teachers are Good Teachers

02/23/2023
by Felix Sarnthein, et al.

In this work, we investigate the implicit regularization induced by teacher-student learning dynamics. To isolate its effect, we describe a simple experiment where, instead of trained teachers, we consider teachers at random initialization. Surprisingly, when distilling a student into such a random teacher, we observe that the resulting model and its representations already possess very interesting characteristics: (1) we observe a strong improvement of the distilled student over its teacher in terms of probing accuracy; (2) the learnt representations are highly transferable between different tasks but deteriorate strongly if trained on random inputs; (3) the student checkpoint suffices to discover so-called lottery tickets, i.e. it contains identifiable, sparse networks that are as performant as the full network. These observations have interesting consequences for several important areas in machine learning: (1) self-distillation can work solely based on the implicit regularization present in the gradient dynamics, without relying on any dark knowledge; (2) self-supervised learning can learn features even in the absence of data augmentation; and (3) SGD already becomes stable with respect to batch orderings when initialized from the student checkpoint. Finally, we shed light on an intriguing local property of the loss landscape: the process of feature learning is strongly amplified if the student is initialized close to the teacher. This raises interesting questions about the nature of the landscape that have so far remained unexplored.
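
To make the setup concrete, below is a minimal sketch of the random-teacher distillation described above, written in PyTorch. The architecture (make_net), the MSE objective, and all hyperparameters are illustrative assumptions rather than the paper's actual configuration; the point is only the structure of the experiment: a frozen, randomly initialized teacher, a student (optionally initialized close to it, which the abstract reports amplifies feature learning), and gradient steps that match student outputs to teacher outputs without labels or augmentations.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Hypothetical backbone; the paper's actual architecture and loss may differ.
    def make_net():
        return nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32 * 3, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 128),
        )

    teacher = make_net()
    for p in teacher.parameters():
        p.requires_grad_(False)          # teacher stays frozen at random initialization

    student = make_net()
    # Optional: start the student close to the teacher via a small perturbation,
    # mirroring the locality effect discussed in the abstract.
    with torch.no_grad():
        for ps, pt in zip(student.parameters(), teacher.parameters()):
            ps.copy_(pt + 1e-3 * torch.randn_like(pt))

    opt = torch.optim.SGD(student.parameters(), lr=0.1, momentum=0.9)

    def distill_step(x):
        """One distillation step towards the frozen random teacher (no labels, no augmentations)."""
        with torch.no_grad():
            target = teacher(x)
        pred = student(x)
        loss = F.mse_loss(pred, target)  # illustrative choice of distillation loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

    # Usage with a dummy batch standing in for unaugmented training images:
    x = torch.randn(64, 3, 32, 32)
    print(distill_step(x))

After training, the student's intermediate representations would be evaluated with a linear probe and compared against those of the random teacher, which is the comparison behind observation (1) above.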
