Separable Self and Mixed Attention Transformers for Efficient Object Tracking

09/07/2023
by Goutam Yelluru Gopal, et al.

The deployment of transformers for visual object tracking has produced state-of-the-art results on several benchmarks. However, transformer-based models remain under-utilized for lightweight Siamese tracking because of the computational complexity of their attention blocks. This paper proposes an efficient self- and mixed-attention transformer-based architecture for lightweight tracking. The proposed backbone uses separable mixed-attention transformers to fuse the template and search regions during feature extraction, producing a superior feature encoding. Our prediction head performs global contextual modeling of the encoded features by leveraging efficient self-attention blocks for robust target state estimation. With these contributions, the proposed lightweight tracker is the first to deploy a transformer-based backbone and head module concurrently. Our ablation study confirms the effectiveness of the proposed combination of backbone and head modules. Simulations show that our Separable Self and Mixed Attention-based Tracker, SMAT, surpasses the performance of related lightweight trackers on the GOT10k, TrackingNet, LaSOT, NfS30, UAV123, and AVisT datasets while running at 37 fps on CPU and 158 fps on GPU with 3.8M parameters. For example, it significantly surpasses the closely related trackers E.T.Track and MixFormerV2-S on GOT10k-test by a margin of 7.9 in the AO metric. The tracker code and models are available at https://github.com/goutamyg/SMAT
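To make the two key ideas in the abstract concrete, the sketch below shows (a) mixed attention, i.e. attending jointly over concatenated template and search tokens so the two regions are fused during feature extraction, and (b) a separable attention form that replaces the quadratic query-key interaction with per-token context scores and a single global context vector, giving linear complexity in the number of tokens. This is only an illustrative NumPy sketch in the spirit of separable attention; the weight names (`w_i`, `W_k`, `W_v`, `W_o`), dimensions, and the exact gating used here are assumptions, not SMAT's actual implementation.

```python
import numpy as np

def softmax(a, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def separable_mixed_attention(z, x, rng):
    """Illustrative separable mixed-attention block (assumed form).

    z: template tokens, shape (Nz, d)
    x: search-region tokens, shape (Nx, d)
    Returns fused tokens of shape (Nz + Nx, d).
    """
    d = z.shape[1]
    # Randomly initialized projections stand in for learned weights.
    w_i = rng.standard_normal((d, 1))   # scores projection
    W_k = rng.standard_normal((d, d))   # key projection
    W_v = rng.standard_normal((d, d))   # value projection
    W_o = rng.standard_normal((d, d))   # output projection

    # Mixed attention: template and search tokens attend jointly.
    tokens = np.concatenate([z, x], axis=0)          # (N, d), N = Nz + Nx

    # Separable attention: one score per token instead of an N x N map.
    scores = softmax(tokens @ w_i, axis=0)           # (N, 1), linear in N
    context = (scores * (tokens @ W_k)).sum(axis=0)  # (d,) global context vector

    # Broadcast the context vector over gated values, then project out.
    out = np.maximum(tokens @ W_v, 0.0) * context    # (N, d)
    return out @ W_o

rng = np.random.default_rng(0)
z = rng.standard_normal((16, 32))   # e.g. 4x4 template patch tokens
x = rng.standard_normal((64, 32))   # e.g. 8x8 search-region tokens
fused = separable_mixed_attention(z, x, rng)
print(fused.shape)  # (80, 32)
```

The point of the separable form is the cost profile: the score and context computations are O(N·d) rather than the O(N²·d) of standard attention, which is what makes such blocks attractive for CPU-speed lightweight trackers.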

Related research

Mobile Vision Transformer-based Visual Object Tracking (09/11/2023)
The introduction of robust backbones, such as Vision Transformers, has i...

Efficient Visual Tracking with Exemplar Transformers (12/17/2021)
The design of more complex and powerful neural network models has signif...

MixFormerV2: Efficient Fully Transformer Tracking (05/25/2023)
Transformer-based trackers have achieved strong accuracy on the standard...

MixFormer: End-to-End Tracking with Iterative Mixed Attention (03/21/2022)
Tracking often uses a multi-stage pipeline of feature extraction, target...

Backbone is All Your Need: A Simplified Architecture for Visual Object Tracking (03/10/2022)
Exploiting a general-purpose neural architecture to replace hand-wired d...

FEAR: Fast, Efficient, Accurate and Robust Visual Tracker (12/15/2021)
We present FEAR, a novel, fast, efficient, accurate, and robust Siamese ...

Learning Spatial-Frequency Transformer for Visual Object Tracking (08/18/2022)
Recent trackers adopt the Transformer to combine or replace the widely u...
