Improving Depth Gradient Continuity in Transformers: A Comparative Study on Monocular Depth Estimation with CNN

by Jiawei Yao et al.
Shanghai Jiao Tong University
University of Washington

Monocular depth estimation remains an open challenge in computer vision. Recent progress with Transformer models has demonstrated notable advantages over conventional CNNs on this task. However, there is still a gap in understanding how these models prioritize different regions of 2D images and how those regions affect depth estimation performance. To explore the differences between Transformers and CNNs, we employ a sparse-pixel approach to contrastively analyze the two architectures. Our findings suggest that while Transformers excel at handling global context and intricate textures, they lag behind CNNs in preserving depth gradient continuity. To further enhance the performance of Transformer models in monocular depth estimation, we propose the Depth Gradient Refinement (DGR) module, which refines depth estimates through high-order differentiation, feature fusion, and recalibration. In addition, we leverage optimal transport theory, treating depth maps as spatial probability distributions and using the optimal transport distance as a loss function to optimize our model. Experimental results demonstrate that models integrated with the plug-and-play DGR module and the proposed loss function improve performance without increasing model complexity or computational cost. This research not only offers fresh insight into the distinctions between Transformers and CNNs in depth estimation but also paves the way for novel depth estimation methodologies.
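The abstract names two ingredients: high-order depth gradients (the basis of the DGR module) and an optimal transport distance between depth maps viewed as spatial probability distributions. As a rough illustration of these underlying ideas only, and not the paper's actual implementation, the following NumPy sketch computes finite-difference depth gradients up to second order and an entropy-regularized (Sinkhorn) optimal transport distance between two tiny depth maps. All function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def depth_gradients(depth, order=2):
    """Finite-difference depth gradients up to second order.
    Illustrative stand-in for the 'high-order differentiation' idea in DGR."""
    gx = np.diff(depth, axis=1)  # first-order horizontal gradient
    gy = np.diff(depth, axis=0)  # first-order vertical gradient
    grads = [gx, gy]
    if order >= 2:
        grads += [np.diff(gx, axis=1), np.diff(gy, axis=0)]  # second-order
    return grads

def sinkhorn_distance(a, b, cost, eps=0.1, n_iter=200):
    """Entropy-regularized OT distance between probability vectors a, b
    under pairwise cost matrix `cost` (standard Sinkhorn iterations)."""
    K = np.exp(-cost / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = np.diag(u) @ K @ np.diag(v)  # transport plan
    return float(np.sum(P * cost))

# High-order gradients of a simple ramp: gx is constant, gxx vanishes.
ramp = np.tile(np.arange(4.0), (4, 1))  # depth increases left to right
gx, gy, gxx, gyy = depth_gradients(ramp)

# Treat two small depth maps as spatial probability distributions.
pred = np.array([[1.0, 2.0], [3.0, 4.0]])
gt = np.array([[1.5, 2.5], [2.5, 3.5]])
a = (pred / pred.sum()).ravel()
b = (gt / gt.sum()).ravel()
# Ground cost: squared Euclidean distance between pixel coordinates.
coords = np.array([[i, j] for i in range(2) for j in range(2)], dtype=float)
cost = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
loss = sinkhorn_distance(a, b, cost)
```

In practice the paper presumably computes these quantities on feature maps inside the network and backpropagates through the loss; this sketch only shows the two mathematical operations in isolation.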


Transformers in Self-Supervised Monocular Depth Estimation with Unknown Camera Intrinsics

The advent of autonomous driving and advanced driver assistance systems ...

Event-based Monocular Dense Depth Estimation with Recurrent Transformers

Event cameras, offering high temporal resolutions and high dynamic range...

MiDaS v3.1 – A Model Zoo for Robust Monocular Relative Depth Estimation

We release MiDaS v3.1 for monocular depth estimation, offering a variety...

MonoFormer: Towards Generalization of Self-Supervised Monocular Depth Estimation with Transformers

Self-supervised monocular depth estimation has been widely studied recen...

A Study on the Generality of Neural Network Structures for Monocular Depth Estimation

Monocular depth estimation has been widely studied, and significant impr...

Frequency-Aware Self-Supervised Monocular Depth Estimation

We present two versatile methods to generally enhance self-supervised mo...

Monocular Depth Estimation with Sharp Boundary

Monocular depth estimation is a fundamental task in computer vision. It has a...
