Collaborative Robot Learning from Demonstrations using Hidden Markov Model State Distribution

by Sulabh Kumra, et al.
Rochester Institute of Technology

In robotics, there is a need for interactive and expedient learning methods, as experience is expensive to acquire. Robot Learning from Demonstration (RLfD) enables a robot to learn a policy from demonstrations performed by a teacher, allowing a human user to add new capabilities to a robot in an intuitive manner, without explicitly reprogramming it. In this work, we present a novel interactive framework in which a collaborative robot learns skills for trajectory-based tasks from demonstrations performed by a human teacher. The robot extracts features, called key-points, from each demonstration and learns a model of the demonstrated skill using a Hidden Markov Model (HMM). Our experimental results show that the learned model can be used to produce a generalized trajectory-based skill.
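The abstract does not specify how key-points are extracted. As an illustrative sketch only (assuming, hypothetically, that key-points are the local extrema of a 1-D joint trajectory plus its endpoints, a common simplification in trajectory-based LfD), the extraction step might look like:

```python
def extract_keypoints(traj):
    """Return (index, value) pairs where a 1-D trajectory changes
    direction -- a simple stand-in for the paper's key-point features.
    The criterion (local extrema + endpoints) is an assumption, not
    the authors' actual method."""
    keypoints = [(0, traj[0])]  # always keep the start point
    for i in range(1, len(traj) - 1):
        prev_delta = traj[i] - traj[i - 1]
        next_delta = traj[i + 1] - traj[i]
        if prev_delta * next_delta < 0:  # sign change => local extremum
            keypoints.append((i, traj[i]))
    keypoints.append((len(traj) - 1, traj[-1]))  # and the end point
    return keypoints

# One hypothetical demonstration of a single joint over time.
demo = [0.0, 0.5, 1.0, 0.8, 0.4, 0.6, 1.2]
print(extract_keypoints(demo))  # -> [(0, 0.0), (2, 1.0), (4, 0.4), (6, 1.2)]
```

The resulting key-point sequences from several demonstrations would then serve as observation sequences for fitting the HMM described in the paper.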



