Jaccard Metric Losses: Optimizing the Jaccard Index with Soft Labels

02/11/2023
by Zifu Wang, et al.

IoU losses are surrogates that directly optimize the Jaccard index. In semantic segmentation, IoU losses have been shown to outperform pixel-wise losses, such as the cross-entropy loss, with respect to the Jaccard index. The most notable IoU losses are the soft Jaccard loss and the Lovász-Softmax loss. However, these losses are incompatible with soft labels, which are ubiquitous in machine learning. In this paper, we propose Jaccard metric losses (JMLs), which are variants of the soft Jaccard loss that are compatible with soft labels. With JMLs, we study two of the most popular use cases of soft labels: label smoothing and knowledge distillation. Across a variety of architectures, our experiments show significant improvements over the cross-entropy loss on three semantic segmentation datasets (Cityscapes, PASCAL VOC and DeepGlobe Land), and our simple approach outperforms state-of-the-art knowledge distillation methods by a large margin. Our source code is available at: https://github.com/zifuwanggg/JDML.
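
For readers unfamiliar with the loss family the paper builds on: a soft Jaccard loss replaces the set cardinalities in the Jaccard index with sums of predicted probabilities, yielding a differentiable per-class IoU. The PyTorch sketch below shows one generic formulation evaluated against smoothed labels; the function name soft_jaccard_loss, the smoothing factor alpha, and the tensor shapes are illustrative assumptions, and this is a generic soft Jaccard formulation rather than the JML variant proposed in the paper.

import torch
import torch.nn.functional as F


def soft_jaccard_loss(probs, targets, eps=1e-6):
    # probs:   (N, C, H, W) predicted class probabilities (softmax output).
    # targets: (N, C, H, W) soft labels with the same shape.
    # Intersection and union use products and sums of probabilities
    # in place of hard set cardinalities.
    dims = (0, 2, 3)  # reduce over batch and spatial dimensions, per class
    intersection = (probs * targets).sum(dims)
    union = (probs + targets - probs * targets).sum(dims)
    jaccard = (intersection + eps) / (union + eps)
    return 1.0 - jaccard.mean()  # average (1 - IoU) over classes


# Example: soft labels produced by label smoothing.
N, C, H, W = 2, 3, 8, 8
logits = torch.randn(N, C, H, W)
labels = torch.randint(0, C, (N, H, W))
one_hot = F.one_hot(labels, C).permute(0, 3, 1, 2).float()
alpha = 0.1  # hypothetical smoothing factor
targets = one_hot * (1 - alpha) + alpha / C
loss = soft_jaccard_loss(logits.softmax(dim=1), targets)

Knowledge distillation would use a teacher network's softened probabilities as targets in the same way.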

Related research

Dice Semimetric Losses: Optimizing the Dice Score with Soft Labels (03/28/2023)
The soft Dice loss (SDL) has taken a pivotal role in many automated segm...

Decoupled Kullback-Leibler Divergence Loss (05/23/2023)
In this paper, we delve deeper into the Kullback-Leibler (KL) Divergence...

Location-aware Upsampling for Semantic Segmentation (11/13/2019)
Many successful learning targets such as dice loss and cross-entropy los...

Optimization of the Jaccard index for image segmentation with the Lovász hinge (05/24/2017)
The Jaccard loss, commonly referred to as the intersection-over-union lo...

Beyond the Pixel-Wise Loss for Topology-Aware Delineation (12/06/2017)
Delineation of curvilinear structures is an important problem in Compute...

The Devil is in the Margin: Margin-based Label Smoothing for Network Calibration (11/30/2021)
In spite of the dominant performances of deep neural networks, recent wo...

Avoiding spurious correlations via logit correction (12/02/2022)
Empirical studies suggest that machine learning models trained with empi...
