MultiSiam: Self-supervised Multi-instance Siamese Representation Learning for Autonomous Driving

08/27/2021
by   Kai Chen, et al.

Autonomous driving has attracted much attention over the years but has turned out to be harder than expected, probably due to the difficulty of collecting labeled data for model training. Self-supervised learning (SSL), which leverages only unlabeled data for representation learning, might be a promising way to improve model performance. Existing SSL methods, however, usually rely on the single-centric-object guarantee, which may not apply to multi-instance datasets such as street scenes. To alleviate this limitation, we raise two issues to solve: (1) how to define positive samples for cross-view consistency and (2) how to measure similarity in multi-instance circumstances. We first adopt an IoU threshold during random cropping to transfer global inconsistency into local consistency. Then, we propose two feature alignment methods that enable 2D feature maps for multi-instance similarity measurement. Additionally, we adopt intra-image clustering with self-attention to further mine intra-image similarity and translation invariance. Experiments show that, when pre-trained on the Waymo dataset, our method, called Multi-instance Siamese Network (MultiSiam), remarkably improves generalization ability and achieves state-of-the-art transfer performance on autonomous driving benchmarks, including Cityscapes and BDD100K, while existing SSL counterparts such as MoCo, MoCo-v2, and BYOL show a significant performance drop. By pre-training on SODA10M, a large-scale autonomous driving dataset, MultiSiam exceeds the ImageNet pre-trained MoCo-v2, demonstrating the potential of domain-specific pre-training. Code will be available at https://github.com/KaiChen1998/MultiSiam.
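The IoU-constrained random cropping mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes rejection sampling with hypothetical parameters (`min_iou`, `min_size`, `max_tries`) to show how two crops of the same image can be forced to overlap enough to share local content:

```python
import random

def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def sample_crop(width, height, min_size):
    """One random axis-aligned crop inside a width x height image."""
    w = random.randint(min_size, width)
    h = random.randint(min_size, height)
    x = random.randint(0, width - w)
    y = random.randint(0, height - h)
    return (x, y, x + w, y + h)

def iou_constrained_crops(width, height, min_iou=0.2, min_size=64, max_tries=100):
    """Sample two crops whose IoU is at least `min_iou` (rejection sampling).

    Crops that barely overlap would make cross-view consistency meaningless
    in a multi-instance scene, so low-IoU pairs are rejected and resampled.
    """
    for _ in range(max_tries):
        c1 = sample_crop(width, height, min_size)
        c2 = sample_crop(width, height, min_size)
        if box_iou(c1, c2) >= min_iou:
            return c1, c2
    # Fallback: return an identical pair (IoU = 1) if no pair qualified.
    c = sample_crop(width, height, min_size)
    return c, c
```

In practice the two crop boxes would then be resized and augmented independently before being fed to the two branches of the Siamese network; the threshold value here is only a placeholder.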


