Rényi Divergence Deep Mutual Learning

09/13/2022 · Weipeng Huang, et al.

This paper revisits Deep Mutual Learning (DML), a simple yet remarkably effective computing paradigm. We observe that its effectiveness correlates strongly with its excellent generalization quality. We interpret the performance improvement of DML from a novel perspective: the procedure is roughly an approximate Bayesian posterior sampling scheme. This view also lays the foundation for applying the Rényi divergence to improve the original DML, since it introduces control over the variance of the prior (in the context of DML). We therefore propose Rényi Divergence Deep Mutual Learning (RDML). Our empirical results demonstrate the benefit of combining DML with the Rényi divergence: the flexible control it provides further improves DML, yielding models that generalize better.
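To make the idea concrete, the sketch below swaps the KL term of vanilla deep mutual learning for a Rényi divergence of order α between the two networks' predictive distributions. This is a minimal PyTorch sketch under our own assumptions (two peers, detached peer targets, α = 0.5 with α ≠ 1); the function names `renyi_divergence` and `rdml_losses` are hypothetical and do not reproduce the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def renyi_divergence(p, q, alpha=0.5, eps=1e-8):
    """Rényi divergence D_alpha(p || q) between rows of the probability
    matrices p and q (shape: batch x classes), averaged over the batch:
        D_alpha(p || q) = 1 / (alpha - 1) * log sum_i p_i^alpha * q_i^(1 - alpha)
    Requires alpha > 0 and alpha != 1; the limit alpha -> 1 recovers the
    KL divergence used by vanilla DML."""
    p = p.clamp_min(eps)  # clamp away zeros for numerical stability
    q = q.clamp_min(eps)
    mix = (p ** alpha) * (q ** (1.0 - alpha))
    return (torch.log(mix.sum(dim=-1)) / (alpha - 1.0)).mean()

def rdml_losses(logits_1, logits_2, targets, alpha=0.5):
    """Per-network losses for one two-peer mutual-learning step: each
    network minimises its supervised cross-entropy plus the Rényi
    divergence from the peer's (detached) predictive distribution."""
    p1 = F.softmax(logits_1, dim=-1)
    p2 = F.softmax(logits_2, dim=-1)
    loss_1 = F.cross_entropy(logits_1, targets) + renyi_divergence(p1, p2.detach(), alpha)
    loss_2 = F.cross_entropy(logits_2, targets) + renyi_divergence(p2, p1.detach(), alpha)
    return loss_1, loss_2
```

Since the Rényi divergence is non-decreasing in its order, α acts as a knob on how strongly each network is pulled toward its peer: orders below 1 match more gently than KL, while orders above 1 match more aggressively.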
