Self-Supervised Pretraining and Controlled Augmentation Improve Rare Wildlife Recognition in UAV Images

08/17/2021
by   Xiaochen Zheng, et al.

Automated animal censuses with aerial imagery are a vital ingredient of wildlife conservation. Recent models are generally based on deep learning and thus require vast amounts of training data. Because the animals are scarce and minuscule in size, annotating them in aerial imagery is a highly tedious process. In this project, we present a methodology to reduce the amount of required training data by resorting to self-supervised pretraining. In detail, we examine a combination of recent contrastive learning methodologies, namely Momentum Contrast (MoCo) and Cross-Level Instance-Group Discrimination (CLD), to condition our model on the aerial images without requiring labels. We show that a combination of MoCo, CLD, and geometric augmentations outperforms conventional models pre-trained on ImageNet by a large margin. Crucially, our method still yields favorable results even if we reduce the number of training animals to just 10%, where it still outperforms the baseline at similar precision. This effectively reduces the number of required annotations to a fraction while still allowing high-accuracy models to be trained in such highly challenging settings.
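To make the pretraining recipe concrete, the sketch below illustrates a MoCo-style momentum-contrast objective combined with geometric augmentations in PyTorch. It is a minimal illustration under assumed settings: the ResNet-18 backbone, queue size, temperature, and the specific crop/flip/rotation transforms are our own choices rather than the paper's configuration, and the CLD cross-level instance-group regularizer mentioned in the abstract is omitted.

# Minimal MoCo-style pretraining sketch with geometric augmentations.
# Hyperparameters and the backbone are illustrative assumptions.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models, transforms

# Geometric augmentations creating two views of each unlabeled aerial crop
# (the exact augmentation set is an assumption, not the authors' choice).
geometric_augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.5, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(degrees=90),
    transforms.ToTensor(),
])

class MoCoSketch(nn.Module):
    """Query encoder, momentum (key) encoder, and a queue of negative keys."""

    def __init__(self, dim=128, queue_size=4096, momentum=0.999, temperature=0.2):
        super().__init__()
        self.m = momentum
        self.t = temperature
        self.encoder_q = models.resnet18(num_classes=dim)
        self.encoder_k = copy.deepcopy(self.encoder_q)
        for p in self.encoder_k.parameters():
            p.requires_grad = False  # key encoder is updated only by momentum
        self.register_buffer("queue", F.normalize(torch.randn(dim, queue_size), dim=0))
        self.register_buffer("queue_ptr", torch.zeros(1, dtype=torch.long))

    @torch.no_grad()
    def _momentum_update(self):
        for pq, pk in zip(self.encoder_q.parameters(), self.encoder_k.parameters()):
            pk.data = pk.data * self.m + pq.data * (1.0 - self.m)

    @torch.no_grad()
    def _dequeue_and_enqueue(self, keys):
        # Assumes queue_size is divisible by the batch size.
        bsz = keys.shape[0]
        ptr = int(self.queue_ptr)
        self.queue[:, ptr:ptr + bsz] = keys.T
        self.queue_ptr[0] = (ptr + bsz) % self.queue.shape[1]

    def forward(self, im_q, im_k):
        q = F.normalize(self.encoder_q(im_q), dim=1)
        with torch.no_grad():
            self._momentum_update()
            k = F.normalize(self.encoder_k(im_k), dim=1)
        # InfoNCE loss: the positive is the momentum-encoded second view,
        # negatives are drawn from the queue of past keys.
        l_pos = torch.einsum("nc,nc->n", q, k).unsqueeze(-1)
        l_neg = torch.einsum("nc,ck->nk", q, self.queue.clone().detach())
        logits = torch.cat([l_pos, l_neg], dim=1) / self.t
        labels = torch.zeros(logits.shape[0], dtype=torch.long, device=logits.device)
        self._dequeue_and_enqueue(k)
        return F.cross_entropy(logits, labels)

if __name__ == "__main__":
    # Illustrative forward/backward pass on random stand-ins for two
    # geometrically augmented views of the same UAV image crops.
    model = MoCoSketch()
    views_q = torch.randn(8, 3, 224, 224)
    views_k = torch.randn(8, 3, 224, 224)
    loss = model(views_q, views_k)
    loss.backward()
    print(float(loss))

In practice, such a loss would be minimized over the unlabeled aerial imagery before fine-tuning on the small set of annotated animals, which is where the label savings reported in the abstract come from.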

