See to Touch: Learning Tactile Dexterity through Visual Incentives

09/21/2023
by Irmak Güzey, et al.

Equipping multi-fingered robots with tactile sensing is crucial for achieving the precise, contact-rich, and dexterous manipulation that humans excel at. However, relying solely on tactile sensing fails to provide adequate cues for reasoning about objects' spatial configurations, limiting the ability to correct errors and adapt to changing situations. In this paper, we present Tactile Adaptation from Visual Incentives (TAVI), a new framework that enhances tactile-based dexterity by optimizing dexterous policies using vision-based rewards. First, we use a contrastive-based objective to learn visual representations. Next, we construct a reward function using these visual representations through optimal-transport based matching on one human demonstration. Finally, we use online reinforcement learning on our robot to optimize tactile-based policies that maximize the visual reward. On six challenging tasks, such as peg pick-and-place, unstacking bowls, and flipping slender objects, TAVI achieves a success rate of 73% using our four-fingered Allegro robot hand. The increase in performance is 108% higher compared to policies using tactile and vision-based rewards and 135% higher compared to policies without tactile observational input. Robot videos are best viewed on our project website: https://see-to-touch.github.io/.
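The optimal-transport reward construction in the second step can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' implementation: it assumes precomputed visual embeddings for the single demonstration and the current rollout (the arrays `demo_emb` and `rollout_emb` are hypothetical), and uses a standard Sinkhorn iteration to softly match rollout frames to demonstration frames, turning the negative transport cost into a per-step reward.

```python
import numpy as np

def sinkhorn(cost, eps=0.05, n_iters=100):
    """Entropy-regularized optimal transport (Sinkhorn) between two
    uniform distributions, given a pairwise cost matrix."""
    n, m = cost.shape
    mu, nu = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-cost / eps)               # Gibbs kernel
    u = np.ones(n)
    for _ in range(n_iters):
        v = nu / (K.T @ u)                # alternate scaling updates
        u = mu / (K @ v)
    return np.diag(u) @ K @ np.diag(v)    # transport plan

def ot_reward(rollout_emb, demo_emb):
    """Per-step reward: negative transport cost of matching the rollout's
    visual embeddings against the demonstration's embeddings."""
    # cosine-distance cost between every rollout frame and demo frame
    r = rollout_emb / np.linalg.norm(rollout_emb, axis=1, keepdims=True)
    d = demo_emb / np.linalg.norm(demo_emb, axis=1, keepdims=True)
    cost = 1.0 - r @ d.T
    plan = sinkhorn(cost)
    # each rollout step is rewarded by how cheaply it matches the demo
    return -(plan * cost).sum(axis=1)

# toy usage: random 64-d embeddings for a 50-step rollout, 40-step demo
rewards = ot_reward(np.random.randn(50, 64), np.random.randn(40, 64))
```

In the full framework, these rewards would be fed to the online reinforcement learning loop in the third step, so that the tactile-based policy is pushed toward rollouts whose visual features transport cheaply onto the human demonstration.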
