A Good Student is Cooperative and Reliable: CNN-Transformer Collaborative Learning for Semantic Segmentation

by Jinjing Zhu, et al.

In this paper, we strive to answer the question "how can convolutional neural network (CNN)-based and vision transformer (ViT)-based models be collaboratively learned by selecting and exchanging reliable knowledge between them for semantic segmentation?" Accordingly, we propose an online knowledge distillation (KD) framework that simultaneously learns compact yet effective CNN-based and ViT-based models, with two key technical contributions that exploit the complementary strengths of CNNs and ViTs while compensating for their limitations. First, we propose heterogeneous feature distillation (HFD), which improves the students' consistency in the low-level feature space by making them mimic each other's heterogeneous features. Second, to enable the two students to learn reliable knowledge from each other, we propose bidirectional selective distillation (BSD), which dynamically transfers selected knowledge in two ways: 1) region-wise BSD determines the direction of knowledge transfer between corresponding regions in the feature space, and 2) pixel-wise BSD discerns which prediction knowledge to transfer in the logit space. Extensive experiments on three benchmark datasets demonstrate that our proposed framework outperforms state-of-the-art online distillation methods by a large margin and confirm the efficacy of collaborative learning between ViT-based and CNN-based models.
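The pixel-wise BSD idea described above can be illustrated with a minimal sketch: for each pixel, the student whose prediction is more reliable with respect to the ground truth acts as the teacher for that pixel, so knowledge flows only in the trustworthy direction. The code below is an illustrative assumption of one plausible selection criterion (lower per-pixel cross-entropy), not the paper's exact formulation; the function names are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax over the class axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pixelwise_bsd_masks(logits_cnn, logits_vit, labels):
    """Per pixel, decide which student teaches the other.

    logits_cnn, logits_vit: (H, W, C) class logits from the two students.
    labels: (H, W) integer ground-truth class indices.
    Returns two boolean (H, W) masks: where the CNN teaches the ViT,
    and where the ViT teaches the CNN. A distillation loss (e.g. KL
    divergence) would then be applied only under the selected mask.
    """
    p_cnn = softmax(logits_cnn)
    p_vit = softmax(logits_vit)
    # Per-pixel cross-entropy w.r.t. the ground-truth class:
    # gather the probability assigned to the true class at each pixel.
    gt_cnn = np.take_along_axis(p_cnn, labels[..., None], axis=-1)[..., 0]
    gt_vit = np.take_along_axis(p_vit, labels[..., None], axis=-1)[..., 0]
    ce_cnn = -np.log(gt_cnn + 1e-12)
    ce_vit = -np.log(gt_vit + 1e-12)
    # The more reliable (lower-loss) student teaches the other.
    cnn_teaches = ce_cnn < ce_vit
    vit_teaches = ~cnn_teaches
    return cnn_teaches, vit_teaches
```

In a training loop, each mask would gate a one-directional distillation term, so every pixel receives supervision only from whichever student is currently more trustworthy there.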




