Flow-Motion and Depth Network for Monocular Stereo and Beyond

09/12/2019
by   Kaixuan Wang, et al.

We propose a learning-based method that solves monocular stereo and can be extended to fuse depth information from multiple target frames. Given two unconstrained images from a monocular camera with known intrinsic calibration, our network estimates the relative camera pose and the depth map of the source image. The core contribution of the proposed method is threefold. First, a network tailored to static scenes jointly estimates the optical flow and the camera motion. Through this joint estimation, the optical-flow search space is gradually reduced, resulting in efficient and accurate flow estimation. Second, a novel triangulation layer is proposed to encode the estimated optical flow and camera motion while avoiding common numerical issues caused by epipolar geometry. Third, going beyond two-view depth estimation, we extend the above networks to fuse depth information from multiple target images and estimate the depth map of the source image. To further benefit the research community, we introduce tools to generate photorealistic structure-from-motion datasets so that deep networks can be well trained and evaluated. The proposed method is compared with previous methods and achieves state-of-the-art results in less time. Images from real-world applications and Google Earth demonstrate the generalization ability of the method.
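To make the triangulation step concrete: given a pixel match induced by the optical flow and a relative camera pose, depth follows from a small least-squares problem. The sketch below is a classical two-view triangulation in NumPy, not the paper's learned triangulation layer; the function name and interface are illustrative. It also exposes the numerical issue the abstract alludes to: when the matched ray passes near the epipole, the cross-product term vanishes and the depth estimate becomes ill-conditioned.

```python
import numpy as np

def triangulate_depth(p1, p2, K, R, t):
    """Least-squares depth of pixel p1 in the source frame, given its
    flow-induced match p2 in the target frame and the relative pose
    (R, t) mapping source-frame points into the target frame:
    X_target = R @ X_source + t.  (Illustrative helper, not the paper's layer.)"""
    Kinv = np.linalg.inv(K)
    x1 = Kinv @ np.array([p1[0], p1[1], 1.0])  # source bearing (z = 1)
    x2 = Kinv @ np.array([p2[0], p2[1], 1.0])  # target bearing (z = 1)
    # x2 must be parallel to d * (R @ x1) + t, i.e. their cross product is zero.
    a = np.cross(x2, R @ x1)
    b = np.cross(x2, t)
    # Solve d * a = -b in the least-squares sense.  Near the epipole,
    # a -> 0 and this division is ill-conditioned -- the degeneracy the
    # proposed triangulation layer is designed to avoid.
    return float(-(a @ b) / (a @ a))
```

For example, with a synthetic 3D point at depth 4 m and a pure sideways translation between the two cameras, the recovered depth matches the ground truth to floating-point precision.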


Related research

10/08/2018 · Joint Unsupervised Learning of Optical Flow and Depth by Watching Stereo Videos
Learning depth and optical flow via deep neural networks by watching vid...

12/07/2016 · DeMoN: Depth and Motion Network for Learning Monocular Stereo
In this paper we formulate structure from motion as a learning problem. ...

02/16/2016 · Fast, Robust, Continuous Monocular Egomotion Computation
We propose robust methods for estimating camera egomotion in noisy, real...

03/13/2011 · SO(3)-invariant asymptotic observers for dense depth field estimation based on visual data and known camera motion
In this paper, we use known camera motion associated to a video sequence...

10/14/2021 · DeepMoCap: Deep Optical Motion Capture Using Multiple Depth Sensors and Retro-Reflectors
In this paper, a marker-based, single-person optical motion capture meth...

04/02/2020 · Learning to See Through Obstructions
We present a learning-based approach for removing unwanted obstructions,...

06/30/2011 · Vision-Based Navigation III: Pose and Motion from Omnidirectional Optical Flow and a Digital Terrain Map
An algorithm for pose and motion estimation using corresponding features...
