Learning to Act with Affordance-Aware Multimodal Neural SLAM

01/24/2022
by   Zhiwei Jia, et al.

Recent years have witnessed an emerging paradigm shift toward embodied artificial intelligence, in which an agent must learn to solve challenging tasks by interacting with its environment. There are several challenges in solving embodied multimodal tasks, including long-horizon planning, vision-and-language grounding, and efficient exploration. We focus on a critical bottleneck, namely the performance of planning and navigation. To tackle this challenge, we propose a Neural SLAM approach that, for the first time, utilizes several modalities for exploration, predicts an affordance-aware semantic map, and plans over it at the same time. This significantly improves exploration efficiency, leads to robust long-horizon planning, and enables effective vision-and-language grounding. With the proposed Affordance-aware Multimodal Neural SLAM (AMSLAM) approach, we obtain more than 40% improvement over prior published work on the ALFRED benchmark and set a new state-of-the-art generalization performance at a success rate of 23.48% on the test unseen scenes.
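The abstract gives no implementation details, but the central idea of planning over an affordance-aware semantic map can be illustrated with a minimal sketch. Everything below is an illustrative assumption rather than the authors' code: the map is modeled as a 2D occupancy grid whose cells carry predicted affordance labels (e.g. "pickupable"), and navigation is plain 4-connected A* to the nearest cell that affords the requested interaction.

```python
# Minimal sketch (not the AMSLAM implementation) of planning over an
# affordance-aware semantic map.  Assumed representation: a 2D occupancy
# grid plus a per-cell set of predicted affordance labels.

import heapq
from dataclasses import dataclass, field


@dataclass
class AffordanceMap:
    occupied: list                                     # H x W grid of 0/1 occupancy
    affordances: dict = field(default_factory=dict)    # (row, col) -> set of labels

    def free(self, cell):
        r, c = cell
        h, w = len(self.occupied), len(self.occupied[0])
        return 0 <= r < h and 0 <= c < w and self.occupied[r][c] == 0

    def cells_with(self, label):
        return [cell for cell, labels in self.affordances.items() if label in labels]


def astar(amap, start, goal):
    """Shortest collision-free path on the grid (4-connected A*)."""
    frontier = [(0, 0, start, [start])]                # (f = g + h, g, cell, path)
    seen = set()
    while frontier:
        _, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if amap.free(nxt) and nxt not in seen:
                h = abs(nxt[0] - goal[0]) + abs(nxt[1] - goal[1])   # Manhattan heuristic
                heapq.heappush(frontier, (g + 1 + h, g + 1, nxt, path + [nxt]))
    return None


def plan_to_affordance(amap, start, label):
    """Navigate to the nearest cell predicted to afford the requested interaction."""
    paths = [astar(amap, start, cell) for cell in amap.cells_with(label)]
    paths = [p for p in paths if p]
    return min(paths, key=len) if paths else None


if __name__ == "__main__":
    grid = [[0, 0, 0, 0],
            [0, 1, 1, 0],
            [0, 0, 0, 0]]
    amap = AffordanceMap(grid, {(2, 3): {"pickupable"}})
    print(plan_to_affordance(amap, (0, 0), "pickupable"))
```

In the paper's setting the affordance labels would come from a learned multimodal prediction head and the map from Neural SLAM; the grid, labels, and planner here only stand in for those components to make the planning step concrete.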


Related research

Graph-based SLAM-Aware Exploration with Prior Topo-Metric Information (08/31/2023)
Autonomous exploration requires the robot to explore an unknown environm...

History Aware Multimodal Transformer for Vision-and-Language Navigation (10/25/2021)
Vision-and-language navigation (VLN) aims to build autonomous visual age...

Think Global, Act Local: Dual-scale Graph Transformer for Vision-and-Language Navigation (02/23/2022)
Following language instructions to navigate in unseen environments is a ...

SASRA: Semantically-aware Spatio-temporal Reasoning Agent for Vision-and-Language Navigation in Continuous Environments (08/26/2021)
This paper presents a novel approach for the Vision-and-Language Navigat...

Multimodal Grounding for Embodied AI via Augmented Reality Headsets for Natural Language Driven Task Planning (04/26/2023)
Recent advances in generative modeling have spurred a resurgence in the ...

HoME: a Household Multimodal Environment (11/29/2017)
We introduce HoME: a Household Multimodal Environment for artificial age...

Personality-aware Human-centric Multimodal Reasoning: A New Task (04/05/2023)
Multimodal reasoning, an area of artificial intelligence that aims at ma...
