AlphaBlock: Embodied Finetuning for Vision-Language Reasoning in Robot Manipulation

by Chuhao Jin, et al.

We propose a novel framework for learning high-level cognitive capabilities in robot manipulation tasks, such as making a smiley face out of building blocks. These tasks often involve complex multi-step reasoning and pose significant challenges due to the limited paired data connecting human instructions (e.g., making a smiley face) and robot actions (e.g., end-effector movements). Existing approaches mitigate this challenge with an open-loop paradigm: they decompose high-level instructions into simple sub-task plans and execute them step-by-step using low-level control models. However, these approaches lack instant observations during multi-step reasoning, leading to sub-optimal results. To address this issue, we propose to automatically collect a cognitive robot dataset using Large Language Models (LLMs). The resulting dataset, AlphaBlock, consists of 35 comprehensive high-level tasks with multi-step text plans and paired observation sequences. To enable efficient data acquisition, we employ elaborated multi-round prompt designs that substantially reduce the need for extensive human involvement. We further propose a closed-loop multi-modal embodied planning model that autoregressively generates plans, taking image observations as input. To facilitate effective learning, we leverage MiniGPT-4 with a frozen visual encoder and LLM, and finetune an additional vision adapter and Q-former to enable the fine-grained spatial perception needed for manipulation tasks. We conduct experiments to verify the superiority of our approach over existing open- and closed-loop methods, achieving a significant increase in success rate of 21.4% on these robot tasks. Real-world demos are provided.
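The abstract's distinction between open-loop and closed-loop planning can be illustrated with a minimal sketch. All names below (`Planner`, `execute_and_observe`, `closed_loop_plan`) are hypothetical stand-ins, not the paper's actual API: the real planner is a finetuned MiniGPT-4 variant with a frozen visual encoder and LLM. The key point is that each sub-task is generated only after the latest observation is available, rather than decomposing the whole instruction up front.

```python
# Hypothetical sketch of closed-loop multi-step planning; not the paper's code.
from dataclasses import dataclass


@dataclass
class Planner:
    """Stub for the multi-modal planner: maps (instruction, current
    observation, plan history) to the next sub-task, autoregressively."""
    max_steps: int = 5

    def next_step(self, instruction, observation, history):
        # A real planner would run a frozen visual encoder + LLM with a
        # finetuned vision adapter / Q-former; here we just enumerate steps.
        step_idx = len(history)
        if step_idx >= self.max_steps:
            return None  # plan finished
        return f"step {step_idx + 1} for '{instruction}' given {observation}"


def execute_and_observe(sub_task):
    """Stub low-level controller: executes a sub-task and returns the new
    image observation (represented here as a string tag)."""
    return f"obs_after[{sub_task}]"


def closed_loop_plan(planner, instruction, initial_observation):
    """Closed-loop: re-plan after every execution using the fresh observation,
    instead of committing to a full sub-task list up front (open-loop)."""
    observation, history = initial_observation, []
    while (step := planner.next_step(instruction, observation, history)) is not None:
        observation = execute_and_observe(step)  # feed back the new observation
        history.append(step)
    return history


plan = closed_loop_plan(Planner(max_steps=3), "make a smiley face", "obs_0")
print(len(plan))  # -> 3 sub-tasks, each conditioned on the latest observation
```

In this toy loop, sub-task 2 is conditioned on the observation produced by executing sub-task 1, which is exactly the instant feedback that the open-loop baselines in the abstract are said to lack.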




Learning Pneumatic Non-Prehensile Manipulation with a Mobile Blower

We investigate pneumatic non-prehensile manipulation (i.e., blowing) as ...

SMART-LLM: Smart Multi-Agent Robot Task Planning using Large Language Models

In this work, we introduce SMART-LLM, an innovative framework designed f...

Embodied Executable Policy Learning with Language-based Scene Summarization

Large language models (LLMs) have shown remarkable success in assisting ...

RoCo: Dialectic Multi-Robot Collaboration with Large Language Models

We propose a novel approach to multi-robot collaboration that harnesses ...

The Helping Hand: An Assistive Manipulation Framework Using Augmented Reality and a Tongue-Drive

A human-in-the-loop system is proposed to enable collaborative manipulat...

Conformal Temporal Logic Planning using Large Language Models: Knowing When to Do What and When to Ask for Help

This paper addresses a new motion planning problem for mobile robots tas...

Language to Rewards for Robotic Skill Synthesis

Large language models (LLMs) have demonstrated exciting progress in acqu...
