Towards Visual Ego-motion Learning in Robots

05/29/2017
by Sudeep Pillai et al.

Many model-based Visual Odometry (VO) algorithms have been proposed over the past decade, often restricted to a specific type of camera optics or to the underlying motion manifold observed. We envision robots that learn and perform these tasks in a minimally supervised setting as they gain more experience. To this end, we propose a fully trainable solution to visual ego-motion estimation for varied camera optics. Our visual ego-motion learning architecture maps observed optical flow vectors to an ego-motion density estimate via a Mixture Density Network (MDN). By modeling the architecture as a Conditional Variational Autoencoder (C-VAE), our model is able to provide introspective reasoning and prediction for ego-motion-induced scene flow. Additionally, the proposed model is especially amenable to bootstrapped ego-motion learning in robots, where supervision for a particular camera sensor can be obtained from standard navigation-based sensor fusion strategies (GPS/INS and wheel-odometry fusion). Through experiments, we show the utility of our proposed approach in enabling self-supervised learning for visual ego-motion estimation in autonomous robots.
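To make the core idea concrete, the following is a minimal, hypothetical sketch of the MDN mapping described above: a small network takes optical-flow features and outputs the parameters of a Gaussian mixture over 6-DoF ego-motion, trained by minimizing the mixture's negative log-likelihood. All layer sizes, feature dimensions, and the random weights below are illustrative assumptions, not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

FLOW_DIM = 4   # assumed flow feature: (x, y, dx, dy) per sampled vector
HIDDEN = 64    # assumed hidden width
K = 3          # assumed number of mixture components
POSE_DIM = 6   # 6-DoF ego-motion (translation + rotation)

# Randomly initialised weights stand in for a trained network.
W1 = rng.normal(0, 0.1, (FLOW_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (HIDDEN, K * (1 + 2 * POSE_DIM)))
b2 = np.zeros(K * (1 + 2 * POSE_DIM))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mdn_forward(flow):
    """Map flow features (N, FLOW_DIM) to mixture params (pi, mu, sigma)."""
    h = np.tanh(flow @ W1 + b1)
    out = h @ W2 + b2
    pi = softmax(out[:, :K])                                    # (N, K)
    mu = out[:, K:K + K * POSE_DIM].reshape(-1, K, POSE_DIM)    # (N, K, D)
    # exp() keeps the per-dimension std-devs strictly positive.
    sigma = np.exp(out[:, K + K * POSE_DIM:]).reshape(-1, K, POSE_DIM)
    return pi, mu, sigma

def mdn_nll(pi, mu, sigma, target):
    """Negative log-likelihood of target poses (N, D) under the mixture."""
    t = target[:, None, :]                                      # (N, 1, D)
    # Log-density of each diagonal-Gaussian component.
    log_comp = -0.5 * (((t - mu) / sigma) ** 2
                       + 2 * np.log(sigma)
                       + np.log(2 * np.pi)).sum(axis=-1)        # (N, K)
    log_mix = np.log(pi + 1e-12) + log_comp
    # Log-sum-exp over components for numerical stability.
    m = log_mix.max(axis=-1, keepdims=True)
    return -(m.squeeze(-1) + np.log(np.exp(log_mix - m).sum(axis=-1)))

# Toy forward pass on random "flow" and "pose" data.
flow = rng.normal(size=(8, FLOW_DIM))
pose = rng.normal(size=(8, POSE_DIM))
pi, mu, sigma = mdn_forward(flow)
loss = mdn_nll(pi, mu, sigma, pose).mean()
```

In training, the gradient of this negative log-likelihood would be backpropagated through the network; predicting a full mixture density rather than a point estimate is what lets the model express multimodal or uncertain ego-motion hypotheses.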


