DFBVS: Deep Feature-Based Visual Servo

by Nicholas Adrian, et al.

Classical Visual Servoing (VS) relies on handcrafted visual features, which limits its generalizability. Recently, a number of approaches, some based on Deep Neural Networks, have been proposed to overcome this limitation by directly comparing the entire target and current camera images. However, by getting rid of visual features altogether, these approaches require the target and current images to be essentially similar, which precludes generalization to unknown, cluttered scenes. Here we propose to perform VS based on visual features, as in classical VS approaches, but, contrary to the latter, we leverage recent breakthroughs in Deep Learning to automatically extract and match the visual features. By doing so, our approach enjoys the advantages of both worlds: (i) because our approach is based on visual features, it is able to steer the robot towards the object of interest even in the presence of significant distraction in the background; (ii) because the features are automatically extracted and matched, our approach can easily and automatically generalize to unseen objects and scenes. In addition, we propose to use a render engine to synthesize the target image, which offers a further level of generalization. We demonstrate these advantages in a robotic grasping task, where the robot is able to steer, with high accuracy, towards the object to grasp, based simply on an image of the object rendered from the camera view corresponding to the desired robot grasping pose.
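The approach pairs learned feature extraction and matching with a classical control law. As a point of reference, a minimal sketch of the standard image-based visual-servoing step that such matched features would feed into (assuming matched, normalized point features with known depths; the function names are illustrative, not from the paper):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 interaction (image Jacobian) matrix for one normalized image point
    (x, y) at depth Z, relating camera velocity to image-point velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(current_pts, target_pts, depths, lam=0.5):
    """Camera velocity command (vx, vy, vz, wx, wy, wz) from matched features.

    current_pts, target_pts: (N, 2) arrays of normalized image coordinates
    for the matched features in the current and target images.
    depths: (N,) estimated depths of the current features.
    Classical IBVS law: v = -lam * pinv(L) @ (s - s*).
    """
    current_pts = np.asarray(current_pts, dtype=float)
    target_pts = np.asarray(target_pts, dtype=float)
    # Stack one 2x6 block per matched feature point.
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(current_pts, depths)])
    e = (current_pts - target_pts).reshape(-1)  # feature error s - s*
    return -lam * np.linalg.pinv(L) @ e
```

With at least three non-degenerate matched points the 2N x 6 stacked Jacobian constrains all six degrees of freedom, and the command drives the feature error, and hence the camera pose, towards the target view.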




Related papers

- Semantically Grounded Object Matching for Robust Robotic Scene Rearrangement: "Object rearrangement has recently emerged as a key competency in robot m..."
- Real-Time Deep Learning Approach to Visual Servo Control and Grasp Detection for Autonomous Robotic Manipulation: "In order to explore robotic grasping in unstructured and dynamic environ..."
- Learning Visual Servoing with Deep Features and Fitted Q-Iteration: "Visual servoing involves choosing actions that move a robot in response ..."
- MPPI-VS: Sampling-Based Model Predictive Control Strategy for Constrained Image-Based and Position-Based Visual Servoing: "In this paper, we open up new avenues for visual servoing systems built ..."
- INVIGORATE: Interactive Visual Grounding and Grasping in Clutter: "This paper presents INVIGORATE, a robot system that interacts with human..."
- Online Deep Clustering with Video Track Consistency: "Several unsupervised and self-supervised approaches have been developed ..."
- DFVS: Deep Flow Guided Scene Agnostic Image Based Visual Servoing: "Existing deep learning based visual servoing approaches regress the rela..."
