C^2Former: Calibrated and Complementary Transformer for RGB-Infrared Object Detection

by Maoxun Yuan, et al.

Object detection on visible (RGB) and infrared (IR) images, an emerging solution for robust around-the-clock detection, has received extensive attention in recent years. With the help of IR images, object detectors become more reliable and robust in practical applications by exploiting combined RGB-IR information. However, existing methods still suffer from modality miscalibration and fusion imprecision. Since the transformer has a powerful capability to model pairwise correlations between different features, in this paper we propose a novel Calibrated and Complementary Transformer, called C^2Former, to address these two problems simultaneously. In C^2Former, we design an Inter-modality Cross-Attention (ICA) module that obtains calibrated and complementary features by learning the cross-attention relationship between the RGB and IR modalities. To reduce the computational cost of computing global attention in ICA, an Adaptive Feature Sampling (AFS) module is introduced to decrease the dimensionality of the feature maps. Because C^2Former operates in the feature domain, it can be embedded into existing RGB-IR object detectors via the backbone network. Accordingly, we construct one single-stage and one two-stage object detector incorporating C^2Former to evaluate its effectiveness and versatility. Extensive experiments on the DroneVehicle and KAIST RGB-IR datasets verify that our method fully utilizes the complementary RGB-IR information and achieves robust detection results. The code is available at https://github.com/yuanmaoxun/Calibrated-and-Complementary-Transformer-for-RGB-Infrared-Object-Detection.git.
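At the heart of the ICA module is cross-attention between the two modalities: queries drawn from one modality attend over keys and values from the other, so each IR feature can aggregate the RGB features most correlated with it. Below is a minimal, stdlib-only sketch of scaled dot-product cross-attention illustrating that idea; the function and variable names are illustrative, not the paper's actual implementation, which also includes the AFS sampling step and learned projections.

```python
import math

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def softmax(row):
    """Numerically stable softmax over one attention-score row."""
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    total = sum(exps)
    return [e / total for e in exps]

def cross_attention(queries_ir, keys_rgb, values_rgb):
    """Scaled dot-product cross-attention (illustrative sketch).

    queries_ir come from one modality (e.g. IR features), while
    keys_rgb/values_rgb come from the other (e.g. RGB features),
    so the output mixes RGB information weighted by IR-RGB similarity.
    """
    d = len(queries_ir[0])
    # Q K^T, scaled by sqrt(d) as in standard transformer attention
    scores = matmul(queries_ir, [list(c) for c in zip(*keys_rgb)])
    scores = [[s / math.sqrt(d) for s in row] for row in scores]
    weights = [softmax(row) for row in scores]
    return matmul(weights, values_rgb)

# Toy example: one IR query token attending over two RGB tokens.
out = cross_attention(
    [[1.0, 0.0]],                    # IR query
    [[1.0, 0.0], [0.0, 1.0]],        # RGB keys
    [[1.0, 2.0], [3.0, 4.0]],        # RGB values
)
```

Because the attention weights form a convex combination, each output row lies between the corresponding RGB value vectors; the AFS module in the paper exists precisely because computing these scores over all spatial positions is quadratic in the feature-map size.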



