VOILA: Visual-Observation-Only Imitation Learning for Autonomous Navigation

by Haresh Karnan et al.

While imitation learning for vision-based autonomous mobile robot navigation has recently received a great deal of attention in the research community, existing approaches typically require state-action demonstrations gathered on the deployment platform. However, what if one cannot easily outfit their platform to record these demonstration signals, or, worse yet, the demonstrator does not have access to the platform at all? Is imitation learning for vision-based autonomous navigation even possible in such scenarios? In this work, we hypothesize that the answer is yes, and that recent ideas from the Imitation from Observation (IfO) literature can be brought to bear such that a robot can learn to navigate using only egocentric video collected by a demonstrator, even in the presence of viewpoint mismatch. To this end, we introduce a new algorithm, Visual-Observation-only Imitation Learning for Autonomous navigation (VOILA), that can successfully learn navigation policies from a single video demonstration collected from a physically different agent. We evaluate VOILA in the photorealistic AirSim simulator and show that it not only successfully imitates the expert but also learns navigation policies that generalize to novel environments. Further, we demonstrate the effectiveness of VOILA in a real-world setting by showing that it enables a wheeled Jackal robot to successfully imitate a human walking through an environment, using a video recorded with a mobile phone camera.
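The abstract does not spell out VOILA's reward formulation, but the general Imitation-from-Observation idea it invokes can be illustrated with a minimal, hypothetical sketch: define a dense reward from the visual similarity between the agent's current observation features and the demonstration frame it is currently tracking, advancing along the demonstration as frames are matched. All names, the cosine-similarity choice, and the threshold below are assumptions for illustration, not the authors' actual method.

```python
import math


def cosine_similarity(a, b):
    # Similarity between two visual feature vectors (e.g., from a
    # pretrained encoder); 1.0 means identical direction, 0.0 orthogonal.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b + 1e-8)


def tracking_reward(agent_feat, demo_feats, pointer, advance_thresh=0.9):
    """Hypothetical IfO-style reward: score the agent against the
    demonstration frame at `pointer`, and advance the pointer once
    that frame is matched closely enough.

    agent_feat  -- feature vector of the agent's current camera frame
    demo_feats  -- list of feature vectors, one per demonstration frame
    pointer     -- index of the demo frame currently being tracked
    Returns (reward, updated_pointer).
    """
    reward = cosine_similarity(agent_feat, demo_feats[pointer])
    if reward >= advance_thresh and pointer < len(demo_feats) - 1:
        pointer += 1
    return reward, pointer
```

Because the reward depends only on observations (no expert actions), it is compatible with the paper's setting where the demonstration is a phone video from a physically different agent; handling viewpoint mismatch would additionally require viewpoint-robust features, which this sketch does not address.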




