NavDreams: Towards Camera-Only RL Navigation Among Humans

03/23/2022
by Daniel Dugas, et al.

Autonomously navigating a robot in everyday crowded spaces requires solving complex perception and planning challenges. When only monocular image sensor data is available as input, classical two-dimensional planning approaches cannot be used. While images pose a significant challenge for perception and planning, they also capture potentially important details, such as complex geometry, body movement, and other visual cues. To solve the navigation task from images alone, algorithms must be able to model the scene and its dynamics using only this channel of information. We investigate whether the world model concept, which has shown state-of-the-art results for modeling and learning policies in Atari games as well as promising results in 2D LiDAR-based crowd navigation, can also be applied to the camera-based navigation problem. To this end, we create simulated environments where a robot must navigate past static and moving humans without colliding in order to reach its goal. We find that state-of-the-art methods can successfully solve the navigation problem, and can generate dream-like predictions of future image sequences that show consistent geometry and moving persons. We also show that policy performance in our high-fidelity sim2real simulation scenario transfers to the real world by testing the policy on a real robot. We make our simulator, models, and experiments available at https://github.com/danieldugas/NavDreams.
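The core world-model idea the abstract refers to can be illustrated with a minimal sketch: compress each observation into a latent vector, learn a dynamics model that predicts the next latent from the current latent and action, and "dream" forward by feeding the model its own predictions. Everything below is a toy assumption for illustration, not the paper's architecture: a linear environment stands in for the simulator, a fixed random projection stands in for the learned image encoder, and the dynamics model is linear.

```python
import numpy as np

# Toy sketch of the world-model concept (illustrative assumptions only):
# encode observation -> latent, learn latent dynamics, roll forward in "dreams".
rng = np.random.default_rng(0)
obs_dim, act_dim, latent_dim = 16, 2, 8

# Stand-in environment with linear dynamics (the paper uses a 3D simulator).
A_env = rng.normal(scale=0.3, size=(obs_dim, obs_dim))
B_env = rng.normal(scale=0.3, size=(obs_dim, act_dim))

def env_step(obs, act):
    return A_env @ obs + B_env @ act

# Fixed random "encoder" projecting observations to latents (in the paper,
# this would be a learned convolutional encoder over camera images).
W_enc = rng.normal(scale=0.3, size=(latent_dim, obs_dim))

# Learned latent dynamics model: z' ~ W_z @ z + W_a @ a.
W_z = np.zeros((latent_dim, latent_dim))
W_a = np.zeros((latent_dim, act_dim))

lr, losses = 0.05, []
for step in range(500):
    obs = rng.normal(size=obs_dim)
    act = rng.normal(size=act_dim)
    z = W_enc @ obs
    z_next = W_enc @ env_step(obs, act)       # target: encoding of next obs
    err = (W_z @ z + W_a @ act) - z_next      # latent prediction error
    losses.append(float(err @ err))
    W_z -= lr * np.outer(err, z)              # gradient of 0.5 * ||err||^2
    W_a -= lr * np.outer(err, act)

# "Dreaming": roll the learned dynamics forward on imagined actions,
# without querying the environment at all.
z = W_enc @ rng.normal(size=obs_dim)
for _ in range(5):
    z = W_z @ z + W_a @ rng.normal(size=act_dim)
```

In the full approach, a policy is then trained inside these imagined rollouts; here the loss over the training loop simply shrinks as the latent dynamics are fit.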


