Unsupervised Domain Adaptation for Visual Navigation

by Shangda Li, et al.
Carnegie Mellon University

Advances in visual navigation methods have led to intelligent embodied navigation agents capable of learning meaningful representations from raw RGB images and performing a wide variety of tasks involving structural and semantic reasoning. However, most learning-based navigation policies are trained and tested in simulation environments. For these policies to be practically useful, they need to be transferred to the real world. In this paper, we propose an unsupervised domain adaptation method for visual navigation. Our method translates images in the target domain to the source domain such that the translation is consistent with the representations learned by the navigation policy. The proposed method outperforms several baselines across two different navigation tasks in simulation. We further show that our method can be used to transfer navigation policies learned in simulation to the real world.
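The core idea in the abstract, translating target-domain images so that the translation preserves the representations learned by the navigation policy, can be sketched as a representation-consistency loss. The sketch below is a minimal illustration, not the paper's implementation: the linear `encoder` stands in for the frozen policy encoder, the linear map `G` stands in for the image translator, and all shapes and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen policy encoder f: flattened image -> representation.
# In the paper this would be the navigation policy's visual encoder.
W = rng.standard_normal((16, 64))

def encoder(x):
    """Frozen policy encoder applied to a flattened image vector."""
    return W @ x

# Hypothetical translator G: target-domain image -> source-domain image.
# Initialized near the identity so translation starts nearly content-preserving.
G = np.eye(64) + 0.01 * rng.standard_normal((64, 64))

def representation_consistency_loss(x_target):
    """Penalize changes in the policy representation under translation.

    A small loss means the translated image is 'consistent with the
    representations learned by the navigation policy', in the sense
    that the frozen encoder sees it the same way as the original.
    """
    x_translated = G @ x_target
    diff = encoder(x_translated) - encoder(x_target)
    return float(np.mean(diff ** 2))

x = rng.standard_normal(64)  # stand-in for a flattened target-domain image
loss = representation_consistency_loss(x)
```

In a full system this term would be one component of the translator's training objective (alongside, e.g., an adversarial domain-translation loss), and `G` would be a learned image-to-image network rather than a linear map.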

