Collaborative Robot Learning from Demonstrations using Hidden Markov Model State Distribution

09/27/2018
by Sulabh Kumra, et al.
Rochester Institute of Technology

In robotics, there is a need for interactive and efficient learning methods, because experience is expensive. Robot Learning from Demonstration (RLfD) enables a robot to learn a policy from demonstrations performed by a teacher. RLfD allows a human user to add new capabilities to a robot in an intuitive manner, without explicitly reprogramming it. In this work, we present a novel interactive framework in which a collaborative robot learns skills for trajectory-based tasks from demonstrations performed by a human teacher. The robot extracts features, called key-points, from each demonstration and learns a model of the demonstrated skill using a Hidden Markov Model (HMM). Our experimental results show that the learned model can be used to produce a generalized trajectory-based skill.
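The key-point idea above can be sketched in code. The snippet below is a minimal, hypothetical illustration, not the paper's actual feature extractor: it assumes key-points are placed at the trajectory's endpoints and wherever the motion direction changes sharply (the paper's exact key-point criterion and HMM training procedure may differ; the resulting per-demonstration key-point sequences would then be used to fit an HMM, e.g. with a library such as hmmlearn).

```python
import numpy as np

def extract_keypoints(traj, angle_thresh=0.3):
    """Return indices of candidate key-points in a 2-D trajectory.

    Hypothetical criterion: the start point, the end point, and any
    interior point where the motion direction turns by more than
    angle_thresh radians.
    """
    keypoints = [0]
    v = np.diff(traj, axis=0)  # vectors between consecutive points
    for i in range(1, len(v)):
        # Angle between successive segment directions.
        cos = np.dot(v[i - 1], v[i]) / (
            np.linalg.norm(v[i - 1]) * np.linalg.norm(v[i]) + 1e-12
        )
        if np.arccos(np.clip(cos, -1.0, 1.0)) > angle_thresh:
            keypoints.append(i)
    keypoints.append(len(traj) - 1)
    return keypoints

# An L-shaped demonstration: move right, then up.
demo = np.array([[0, 0], [1, 0], [2, 0], [2, 1], [2, 2]], dtype=float)
print(extract_keypoints(demo))  # → [0, 2, 4]: endpoints plus the corner
```

Detecting a compact set of key-points like this is what makes HMM training tractable: each demonstration collapses to a short sequence of salient states rather than thousands of raw poses.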

Related research:
- Inferring the Geometric Nullspace of Robot Skills from Human Demonstrations (03/30/2021)
- Interactive Policy Learning through Confidence-Based Autonomy (01/15/2014)
- Efficient Model Learning for Human-Robot Collaborative Tasks (05/24/2014)
- Keyframe Demonstration Seeded and Bayesian Optimized Policy Search (01/19/2023)
- Robot Learning from Demonstration Using Elastic Maps (08/03/2022)
- A Robot that Learns Connect Four Using Game Theory and Demonstrations (01/03/2020)
