Data-free knowledge distillation (DFKD) aims to obtain a lightweight stu...
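To make the DFKD setup concrete, here is a minimal sketch of one illustrative training step in PyTorch: synthetic inputs come from a generator rather than a real dataset, and the student matches the teacher's softened outputs on them. The generator, teacher, and student modules, along with z_dim, batch_size, and T, are placeholder assumptions, not details from the snippet above.

```python
import torch
import torch.nn.functional as F

def dfkd_step(generator, teacher, student, z_dim=100, batch_size=64, T=4.0):
    # Draw latent noise and synthesize a batch; no real data is used.
    z = torch.randn(batch_size, z_dim)
    x = generator(z)
    # The frozen teacher provides the only supervision signal.
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)
    # The student matches the teacher's softened predictions on synthetic data.
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
```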
Multi-teacher knowledge distillation provides students with additional s...
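As a rough illustration of how multiple teachers can supply extra supervision, the following sketch averages temperature-softened teacher predictions into a single soft target. Uniform weighting is an assumption; many multi-teacher methods instead learn per-teacher or per-sample weights.

```python
import torch
import torch.nn.functional as F

def multi_teacher_targets(teacher_logits_list, T=4.0, weights=None):
    # Default to uniform weighting over teachers (an assumption).
    n = len(teacher_logits_list)
    if weights is None:
        weights = [1.0 / n] * n
    # Soften each teacher's logits, then take the weighted mixture.
    mix = [w * F.softmax(logits / T, dim=-1)
           for w, logits in zip(weights, teacher_logits_list)]
    return torch.stack(mix).sum(dim=0)  # ensemble soft target, shape (B, C)
```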
Recent years have witnessed significant progress in developing efficient...
Although diffusion models have shown great potential for generating higher...
Graph neural networks (GNNs) have become one of the most popular researc...
Considerable progress has been made in domain generalization (DG), which ...
Knowledge graph embedding (KGE) has been intensively investigated for li...
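KGE methods assign a plausibility score to each (head, relation, tail) triple. As one concrete scoring function, the sketch below uses TransE, chosen here purely for brevity rather than taken from the snippet above: the score is the negative distance between h + r and t.

```python
import torch

def transe_score(h, r, t):
    # Embeddings are trained so that h + r lands near t for true triples,
    # so a smaller distance (higher score) means a more plausible link.
    return -torch.norm(h + r - t, p=2, dim=-1)
```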
Existing knowledge distillation methods on graph neural networks (GNNs) ...
Knowledge distillation aims to compress a powerful yet cumbersome teache...
Knowledge distillation aims to enhance the performance of a lightweight ...
Knowledge distillation was initially introduced to utilize additional sup...
Knowledge distillation has recently become a popular technique to improv...
Domain generalization (DG) aims to learn a generalizable model from mult...
Domain generalization (DG) utilizes multiple labeled source datasets to ...
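Both DG snippets above share the same basic training setup: a single model fit on several labeled source domains. A minimal empirical-risk-minimization baseline over pooled domains might look as follows; the function and argument names are illustrative only, not any specific DG method.

```python
import torch

def dg_erm_step(model, loss_fn, domain_batches, optimizer):
    # domain_batches: one (inputs, labels) batch per source domain.
    optimizer.zero_grad()
    per_domain = [loss_fn(model(x), y) for x, y in domain_batches]
    # Average across domains so each domain counts equally, regardless
    # of how many samples its batch happens to contain.
    loss = torch.stack(per_domain).mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```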
Knowledge distillation is a generalized logits matching technique for mo...
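Several of the snippets here define knowledge distillation; in its canonical logits-matching form (Hinton et al.), the student minimizes a temperature-scaled KL divergence to the teacher's softened logits plus a standard cross-entropy term on ground-truth labels. A minimal sketch, with illustrative values for T and alpha:

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soft-target term: temperature-softened logits matching, scaled by
    # T^2 so its gradients stay comparable to the hard-label term.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: ordinary cross-entropy on ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```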
Knowledge Distillation (KD) aims at transferring knowledge from a larger...
Knowledge distillation is an effective method to transfer the knowledge ...
Distillation is an effective knowledge-transfer technique that uses pred...
Given the intractability of large-scale HINs, network embedding, which lea...
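Network embedding of this kind typically optimizes a skip-gram objective over sampled node-context pairs (e.g., drawn from random walks). The sketch below shows that core loss with negative sampling; the class name, dimensions, and sampling inputs are assumptions, not details from the snippet above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipGramEmbedding(nn.Module):
    # Minimal skip-gram objective over node-context pairs, the core of
    # random-walk network embedding (DeepWalk/metapath2vec style).
    def __init__(self, num_nodes, dim=128):
        super().__init__()
        self.center = nn.Embedding(num_nodes, dim)
        self.context = nn.Embedding(num_nodes, dim)

    def forward(self, center_ids, context_ids, negative_ids):
        c = self.center(center_ids)      # (B, d)
        pos = self.context(context_ids)  # (B, d)
        neg = self.context(negative_ids) # (B, k, d)
        pos_score = (c * pos).sum(-1)                             # (B,)
        neg_score = torch.bmm(neg, c.unsqueeze(-1)).squeeze(-1)   # (B, k)
        # Negative-sampling loss: pull true contexts close, push
        # sampled negatives away.
        return -(F.logsigmoid(pos_score).mean()
                 + F.logsigmoid(-neg_score).mean())
```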