Knowledge Adaptation for Efficient Semantic Segmentation

03/12/2019
by Tong He, et al.

Both accuracy and efficiency are of significant importance to the task of semantic segmentation. Existing deep FCNs suffer from heavy computation because they maintain a series of high-resolution feature maps to preserve detailed knowledge for dense estimation. Although reducing the feature-map resolution (i.e., applying a large overall stride) via subsampling operations (e.g., pooling and convolution striding) can immediately improve efficiency, it dramatically decreases estimation accuracy. To tackle this dilemma, we propose a knowledge distillation method tailored for semantic segmentation that improves the performance of compact FCNs with a large overall stride. To handle the inconsistency between the features of the student and teacher networks, we optimize feature similarity in a transferred latent domain formulated by a pre-trained autoencoder. Moreover, an affinity distillation module is proposed to capture long-range dependencies by computing non-local interactions across the whole image. To validate the effectiveness of the proposed method, extensive experiments are conducted on three popular benchmarks: Pascal VOC, Cityscapes, and Pascal Context. Built upon a highly competitive baseline, our method improves the performance of a student network by 2.5% (mIoU rises from 70.2 to 72.7 on the Cityscapes test set) and can train a better compact model with only 8% of the floating-point operations (FLOPs) of a model that achieves comparable performance.
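The two distillation signals described above, feature matching in an autoencoder's latent domain and affinity (non-local) distillation, can be sketched as short PyTorch modules. The sketch below rests on stated assumptions: the 1x1-convolution encoder and adapter layers, the L2 objectives, and the names AffinityDistillationLoss and LatentFeatureLoss are all illustrative, not the authors' released implementation.

# A minimal PyTorch sketch of the two distillation losses described above.
# The module layout, 1x1-conv encoder/adapter, and L2 objectives are
# illustrative assumptions, not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AffinityDistillationLoss(nn.Module):
    """L2 loss between the pairwise (non-local) affinity matrices of
    student and teacher feature maps."""

    def forward(self, f_s, f_t):
        # f_s, f_t: (B, C, H, W); the teacher map is assumed to be resized
        # to the student's spatial resolution beforehand.
        def affinity(f):
            f = f.flatten(2)                        # (B, C, H*W)
            f = F.normalize(f, dim=1)               # unit-norm feature per location
            return torch.bmm(f.transpose(1, 2), f)  # (B, HW, HW) cosine affinities

        return F.mse_loss(affinity(f_s), affinity(f_t))


class LatentFeatureLoss(nn.Module):
    """Match student features to teacher features in the latent space of an
    autoencoder pre-trained on the teacher's feature maps."""

    def __init__(self, c_student, c_teacher, c_latent=128):
        super().__init__()
        # Encoder half of the autoencoder; in practice its weights would be
        # loaded from the pre-trained autoencoder and frozen during distillation.
        self.encoder = nn.Conv2d(c_teacher, c_latent, kernel_size=1)
        for p in self.encoder.parameters():
            p.requires_grad = False
        # Small trainable adapter mapping student features into the same latent domain.
        self.adapter = nn.Conv2d(c_student, c_latent, kernel_size=1)

    def forward(self, f_s, f_t):
        z_s = self.adapter(f_s)
        z_t = self.encoder(f_t)
        return F.mse_loss(z_s, z_t.detach())

At training time, these two terms would be added to the usual per-pixel cross-entropy segmentation loss with scalar weights, which are hyper-parameters to be tuned.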


Related research

05/07/2022  Distilling Inter-Class Distance for Semantic Segmentation
Knowledge distillation is widely adopted in semantic segmentation to red...

08/08/2023  AICSD: Adaptive Inter-Class Similarity Distillation for Semantic Segmentation
In recent years, deep neural networks have achieved remarkable accuracy ...

11/02/2022  LightVessel: Exploring Lightweight Coronary Artery Vessel Segmentation via Similarity Knowledge Distillation
In recent years, deep convolution neural networks (DCNNs) have achieved ...

03/23/2023  A Simple and Generic Framework for Feature Distillation via Channel-wise Transformation
Knowledge distillation is a popular technique for transferring the knowl...

08/27/2021  CoCo DistillNet: a Cross-layer Correlation Distillation Network for Pathological Gastric Cancer Segmentation
In recent years, deep convolutional neural networks have made significan...

05/06/2023  Structural and Statistical Texture Knowledge Distillation for Semantic Segmentation
Existing knowledge distillation works for semantic segmentation mainly f...

07/19/2021  Double Similarity Distillation for Semantic Image Segmentation
The balance between high accuracy and high speed has always been a chall...
