Self-Supervised Learning of Depth and Ego-motion with Differentiable Bundle Adjustment

09/28/2019
by   Yunxiao Shi, et al.

Learning to predict scene depth and camera motion from RGB inputs alone is a challenging task. Most existing learning-based methods address it in a supervised manner, requiring ground-truth data that is expensive to acquire. More recent approaches explore estimating scene depth and camera pose in a self-supervised learning framework. Despite encouraging results, current methods either learn depth and pose from monocular videos, typically without enforcing multi-view geometry constraints between scene structure and camera motion, or require stereo sequences as input, where the ground-truth between-frame motion parameters must be known. In this paper we propose to jointly optimize scene depth and camera motion by incorporating a differentiable Bundle Adjustment (BA) layer that minimizes the feature-metric error, and then form a photometric consistency loss through view synthesis as the final supervisory signal. The proposed approach needs only unlabeled monocular videos as input. Extensive experiments on the KITTI and Cityscapes datasets show that our method achieves state-of-the-art results among self-supervised approaches using monocular videos as input, and even compares favorably against methods that learn from calibrated stereo sequences (i.e., with pose supervision).
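To make the supervisory signal concrete, the following is a minimal sketch (not the authors' implementation) of the view-synthesis step and the resulting photometric consistency loss: a source frame is warped into the target view using the predicted depth and relative pose, and an L1 difference against the target frame supervises both networks. All function names, tensor shapes, and the PyTorch formulation here are illustrative assumptions; the paper's BA layer and feature-metric error are not shown.

    # Minimal sketch of view synthesis + photometric consistency loss (assumed PyTorch-style code)
    import torch
    import torch.nn.functional as F

    def pixel_grid(h, w, device):
        """Homogeneous pixel coordinates, shape (3, h*w)."""
        ys, xs = torch.meshgrid(
            torch.arange(h, device=device, dtype=torch.float32),
            torch.arange(w, device=device, dtype=torch.float32),
            indexing="ij",
        )
        ones = torch.ones_like(xs)
        return torch.stack([xs, ys, ones], dim=0).reshape(3, -1)

    def view_synthesis(src_img, depth, T_src_tgt, K):
        """Warp src_img into the target view given predicted target depth and relative pose.

        src_img:   (B, 3, H, W) source frame
        depth:     (B, 1, H, W) predicted depth of the target frame
        T_src_tgt: (B, 4, 4)    predicted pose mapping target coordinates into the source camera
        K:         (B, 3, 3)    camera intrinsics
        """
        b, _, h, w = src_img.shape
        pix = pixel_grid(h, w, src_img.device).unsqueeze(0).expand(b, -1, -1)   # (B, 3, H*W)

        # Back-project target pixels into 3D using the predicted depth.
        cam = torch.linalg.inv(K) @ pix * depth.reshape(b, 1, -1)                # (B, 3, H*W)
        cam_h = torch.cat([cam, torch.ones(b, 1, h * w, device=cam.device)], 1)  # (B, 4, H*W)

        # Transform into the source camera and project with the intrinsics.
        src_cam = (T_src_tgt @ cam_h)[:, :3]
        src_pix = K @ src_cam
        src_pix = src_pix[:, :2] / src_pix[:, 2:3].clamp(min=1e-6)

        # Normalize pixel coordinates to [-1, 1] and bilinearly sample the source frame.
        u = 2.0 * src_pix[:, 0] / (w - 1) - 1.0
        v = 2.0 * src_pix[:, 1] / (h - 1) - 1.0
        grid = torch.stack([u, v], dim=-1).reshape(b, h, w, 2)
        return F.grid_sample(src_img, grid, padding_mode="border", align_corners=True)

    def photometric_loss(tgt_img, src_img, depth, T_src_tgt, K):
        """L1 photometric consistency between the target frame and the synthesized view."""
        warped = view_synthesis(src_img, depth, T_src_tgt, K)
        return (warped - tgt_img).abs().mean()

In this reading, the depth and pose predictions (here refined by the differentiable BA layer in the paper) only need to make the warped source frame match the target frame, so no ground-truth depth or motion labels are required.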


Related research

08/22/2023
WS-SfMLearner: Self-supervised Monocular Depth and Ego-motion Estimation on Surgical Videos with Unknown Camera Parameters
Depth estimation in surgical video plays a crucial role in many image-gu...

09/19/2019
Self-Supervised Monocular Depth Hints
Monocular depth estimators can be trained with various forms of self-sup...

03/30/2021
Endo-Depth-and-Motion: Localization and Reconstruction in Endoscopic Videos using Depth Networks and Photometric Constraints
Estimating a scene reconstruction and the camera motion from in-body vid...

07/25/2020
Crowdsourced 3D Mapping: A Combined Multi-View Geometry and Self-Supervised Learning Approach
The ability to efficiently utilize crowdsourced visual data carries imme...

12/12/2022
CbwLoss: Constrained Bidirectional Weighted Loss for Self-supervised Learning of Depth and Pose
Photometric differences are widely used as supervision signals to train ...

12/01/2017
Learning Depth from Monocular Videos using Direct Methods
The ability to predict depth from a single image - using recent advances...

06/03/2019
Y-GAN: A Generative Adversarial Network for Depthmap Estimation from Multi-camera Stereo Images
Depth perception is a key component for autonomous systems that interact...
