Finding Fallen Objects Via Asynchronous Audio-Visual Integration

07/07/2022
by Chuang Gan, et al.

The way an object looks and sounds provides complementary reflections of its physical properties. In many settings, cues from vision and audition arrive asynchronously but must be integrated, as when we hear an object dropped on the floor and then must find it. In this paper, we introduce a setting in which to study multi-modal object localization in 3D virtual environments. An object is dropped somewhere in a room. An embodied robot agent, equipped with a camera and microphone, must determine what object has been dropped – and where – by combining audio and visual signals with knowledge of the underlying physics. To study this problem, we have generated a large-scale dataset – the Fallen Objects dataset – that includes 8000 instances of 30 physical object categories in 64 rooms. The dataset is built on the ThreeDWorld platform, which simulates physics-based impact sounds and complex physical interactions between objects in a photorealistic setting. As a first step toward addressing this challenge, we develop a set of embodied agent baselines, based on imitation learning, reinforcement learning, and modular planning, and perform an in-depth analysis of the challenges of this new task.
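The abstract describes the baselines only at a high level, but the task structure makes the core modeling problem concrete: the impact sound is heard once at drop time, while visual observations arrive as a stream during the search. The sketch below is a minimal, hypothetical illustration of such an asynchronous audio-visual policy in PyTorch. The module names (AudioVisualPolicy, the encoder branches), network shapes, and the fixed action set are assumptions made for exposition; this is not the paper's implementation.

```python
# Hypothetical sketch of an audio-visual search policy for the
# fallen-object task. Not the paper's implementation; assumes PyTorch
# and fixed-size dummy inputs.
import torch
import torch.nn as nn

ACTIONS = ["move_forward", "turn_left", "turn_right", "declare_found"]

class AudioVisualPolicy(nn.Module):
    def __init__(self, n_actions: int = len(ACTIONS)):
        super().__init__()
        # Audio branch: encode a log-mel spectrogram of the impact sound,
        # which is observed once at the start of the episode.
        self.audio_enc = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 128),
        )
        # Vision branch: encode the current egocentric RGB frame.
        self.vision_enc = nn.Sequential(
            nn.Conv2d(3, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 128),
        )
        # A recurrent cell carries a memory across steps, so the stale
        # audio cue can be integrated with visual evidence found later.
        self.rnn = nn.GRUCell(256, 256)
        self.head = nn.Linear(256, n_actions)

    def forward(self, spectrogram, frame, hidden):
        a = self.audio_enc(spectrogram)   # (B, 128) audio embedding
        v = self.vision_enc(frame)        # (B, 128) visual embedding
        hidden = self.rnn(torch.cat([a, v], dim=-1), hidden)
        return self.head(hidden), hidden  # action logits, updated memory

# Usage: one decision step on dummy inputs.
policy = AudioVisualPolicy()
spec = torch.randn(1, 1, 64, 64)    # log-mel spectrogram of the impact
rgb = torch.randn(1, 3, 128, 128)   # current camera frame
h = torch.zeros(1, 256)
logits, h = policy(spec, rgb, h)
action = ACTIONS[logits.argmax(dim=-1).item()]
```

The recurrent state is the design point worth noting: because the impact sound arrives once and the agent must keep searching afterward, the audio embedding has to persist in memory across steps rather than be re-observed, which is what makes the integration asynchronous.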

Related research

06/10/2019 · DensePhysNet: Learning Dense Physical Object Representations via Multi-step Dynamic Interactions
We study the problem of learning physical object representations for rob...

07/24/2020 · Unsupervised Discovery of 3D Physical Objects from Video
We study the problem of unsupervised physical object discovery. Unlike e...

03/08/2019 · Learning to Identify Object Instances by Touch: Tactile Recognition via Multimodal Matching
Much of the literature on robotic perception focuses on the visual modal...

05/10/2019 · Do Autonomous Agents Benefit from Hearing?
Mapping states to actions in deep reinforcement learning is mainly based...

03/29/2018 · Learning Kinematic Descriptions using SPARE: Simulated and Physical ARticulated Extendable dataset
Next generation robots will need to understand intricate and articulated...

07/09/2020 · ThreeDWorld: A Platform for Interactive Multi-Modal Physical Simulation
We introduce ThreeDWorld (TDW), a platform for interactive multi-modal p...

09/08/2021 · YouRefIt: Embodied Reference Understanding with Language and Gesture
We study the understanding of embodied reference: One agent uses both la...
