Domain Generalization for Crop Segmentation with Knowledge Distillation

by Simone Angarano et al.

In recent years, precision agriculture has gradually steered farming toward automation to support all the activities related to field management. Service robotics plays a predominant role in this evolution by deploying autonomous agents that can navigate fields while performing tasks such as monitoring, spraying, and harvesting without human intervention. To execute these precise actions, mobile robots need a real-time perception system that understands their surroundings and identifies their targets in the wild. Generalizing to new crops and environmental conditions is critical for practical applications, as labeled samples are rarely available. In this paper, we investigate the problem of crop segmentation and propose a novel approach to enhance domain generalization using knowledge distillation. In the proposed framework, knowledge is transferred from an ensemble of models individually trained on source domains to a student model that can adapt to unseen target domains. To evaluate the proposed method, we present a synthetic multi-domain dataset for crop segmentation with more than 50,000 samples, containing plants of varied shapes and covering different terrain styles, weather conditions, and lighting scenarios. We demonstrate significant improvements in performance over state-of-the-art methods. Our approach provides a promising solution for domain generalization in crop segmentation and has the potential to enhance precision agriculture applications.
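The core idea of distilling an ensemble of source-domain teachers into a single student can be sketched with a standard multi-teacher distillation loss: the teachers' temperature-softened per-pixel class probabilities are averaged, and the student is trained to match that average via KL divergence. The sketch below is a minimal NumPy illustration of this generic technique, not the paper's exact implementation; the function names, the temperature value, and the simple probability averaging are assumptions for demonstration.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax along the last (class) axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_distillation_loss(student_logits, teacher_logits_list, T=2.0):
    """KL divergence between the averaged soft predictions of an
    ensemble of source-domain teachers and the student's soft
    predictions, averaged over all pixels.

    Logits have shape (H, W, C) for per-pixel segmentation.
    """
    # Average the teachers' temperature-softened probabilities.
    teacher_probs = np.mean(
        [softmax(t, T) for t in teacher_logits_list], axis=0)
    student_probs = softmax(student_logits, T)
    # KL(teacher || student); the T^2 factor is the usual scaling
    # that keeps gradient magnitudes comparable across temperatures.
    kl = teacher_probs * (np.log(teacher_probs + 1e-8)
                          - np.log(student_probs + 1e-8))
    return (T ** 2) * kl.sum(axis=-1).mean()
```

With this formulation the loss is zero when the student already reproduces the (single) teacher's logits, and strictly positive otherwise, so minimizing it pulls the student toward the ensemble consensus across source domains.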




