PACT: Perception-Action Causal Transformer for Autoregressive Robotics Pre-Training

09/22/2022
by Rogerio Bonatti, et al.

Robotics has long been a field riddled with complex system architectures whose modules and connections, whether traditional or learning-based, require significant human expertise and prior knowledge. Inspired by large pre-trained language models, this work introduces a paradigm for pre-training a general-purpose representation that can serve as a starting point for multiple tasks on a given robot. We present the Perception-Action Causal Transformer (PACT), a generative transformer-based architecture that aims to build representations directly from robot data in a self-supervised fashion. Through autoregressive prediction of states and actions over time, our model implicitly encodes dynamics and behaviors for a particular robot. Our experimental evaluation focuses on the domain of mobile agents, where we show that this robot-specific representation can function as a single starting point to achieve distinct tasks such as safe navigation, localization, and mapping. We evaluate two form factors: a wheeled robot that uses a LiDAR sensor as perception input (MuSHR), and a simulated agent that uses first-person RGB images (Habitat). We show that fine-tuning small task-specific networks on top of the larger pre-trained model results in significantly better performance than training a single model from scratch for all tasks simultaneously, and comparable performance to training a separate large model for each task independently. By sharing a common high-quality representation across tasks, we can lower overall model capacity and speed up the real-time deployment of such systems.
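The abstract describes the core pre-training idea: a causal (GPT-style) transformer trained autoregressively over interleaved state and action tokens from logged robot trajectories. Below is a minimal sketch of that idea; all names, dimensions, tokenizer choices, and loss details are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of autoregressive perception-action pre-training, assuming
# continuous state/action features; the real PACT architecture and tokenizers
# may differ (this is only an illustration of the abstract's description).
import torch
import torch.nn as nn


class PerceptionActionCausalTransformer(nn.Module):
    def __init__(self, state_dim=64, action_dim=2, embed_dim=128,
                 n_layers=4, n_heads=4, seq_len=16):
        super().__init__()
        # Separate tokenizers embed perception features (e.g. encoded LiDAR
        # or RGB observations) and actions into a shared embedding space.
        self.state_tokenizer = nn.Linear(state_dim, embed_dim)
        self.action_tokenizer = nn.Linear(action_dim, embed_dim)
        # Learned positional embeddings over the interleaved (s, a) sequence.
        self.pos_embed = nn.Parameter(torch.zeros(1, 2 * seq_len, embed_dim))
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Prediction heads: a state token predicts the next action,
        # an action token predicts the next state.
        self.action_head = nn.Linear(embed_dim, action_dim)
        self.state_head = nn.Linear(embed_dim, state_dim)

    def forward(self, states, actions):
        # states: (B, T, state_dim), actions: (B, T, action_dim)
        B, T, _ = states.shape
        s_tok = self.state_tokenizer(states)
        a_tok = self.action_tokenizer(actions)
        # Interleave as s_1, a_1, s_2, a_2, ... -> (B, 2T, E).
        tokens = torch.stack([s_tok, a_tok], dim=2).reshape(B, 2 * T, -1)
        tokens = tokens + self.pos_embed[:, :2 * T]
        # Causal mask so each token attends only to past tokens.
        mask = torch.triu(torch.full((2 * T, 2 * T), float("-inf")), diagonal=1)
        h = self.transformer(tokens, mask=mask)
        pred_actions = self.action_head(h[:, 0::2])  # state positions -> a_t
        pred_states = self.state_head(h[:, 1::2])    # action positions -> s_{t+1}
        return pred_actions, pred_states


if __name__ == "__main__":
    # Example self-supervised pre-training step on a batch of trajectories.
    model = PerceptionActionCausalTransformer()
    states = torch.randn(8, 16, 64)   # e.g. encoded LiDAR scans
    actions = torch.randn(8, 16, 2)   # e.g. (steering, throttle)
    pred_a, pred_s = model(states, actions)
    loss = nn.functional.mse_loss(pred_a, actions) + \
           nn.functional.mse_loss(pred_s[:, :-1], states[:, 1:])
    loss.backward()
```

Once pre-trained this way, small task-specific heads (e.g. for navigation, localization, or mapping) could be fine-tuned on top of the frozen or lightly tuned transformer trunk, which is the deployment pattern the abstract evaluates.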
