Generative adversarial training of product of policies for robust and adaptive movement primitives

11/06/2020
by Emmanuel Pignat et al.

In learning from demonstrations, many generative models of trajectories make simplifying independence assumptions, sacrificing correctness for tractability and speed during the learning phase. The ignored dependencies, which are often the kinematic and dynamic constraints of the system, are then restored only when the motion is synthesized, which can introduce heavy distortions. In this work, we propose to use these approximate trajectory distributions as close-to-optimal discriminators in the popular generative adversarial framework, stabilizing and accelerating the learning procedure. Our method addresses two problems: adaptability and robustness. To adapt the motions to varying contexts, we propose a product of Gaussian policies defined in several parametrized task spaces. Robustness to perturbations and varying dynamics is ensured by learning the stochastic dynamics with stochastic gradient descent and ensemble methods. Two experiments on a 7-DoF manipulator validate the approach.
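The fusion of Gaussian policies mentioned above rests on a standard identity: the product of Gaussian densities is, up to normalization, again Gaussian, with the precisions summed and the mean given by a precision-weighted average. As a minimal sketch (not the paper's implementation; the function name and interface are illustrative), fusing policies already expressed in a common space looks like this:

```python
import numpy as np

def product_of_gaussians(means, covs):
    """Fuse Gaussian policies N(mu_i, Sigma_i) defined over the same variable.

    The (unnormalized) product of the densities is a Gaussian with
    precision P = sum_i Sigma_i^-1 and mean mu = P^-1 sum_i Sigma_i^-1 mu_i,
    so more confident experts (smaller covariance) dominate the result.
    """
    precisions = [np.linalg.inv(S) for S in covs]
    P = sum(precisions)                       # combined precision
    Sigma = np.linalg.inv(P)                  # combined covariance
    mu = Sigma @ sum(Pi @ m for Pi, m in zip(precisions, means))
    return mu, Sigma

# Two equally confident 1-D experts at 0 and 2 fuse to their midpoint,
# with variance halved relative to either expert alone.
mu, Sigma = product_of_gaussians(
    [np.array([0.0]), np.array([2.0])],
    [np.eye(1), np.eye(1)],
)
```

In the setting of the paper, each expert would live in a different parametrized task space, so each Gaussian would first be mapped into the configuration space (e.g. through the task-space Jacobian) before being fused; the sketch above shows only the fusion step itself.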


Related research

SDYN-GANs: Adversarial Learning Methods for Multistep Generative Models for General Order Stochastic Dynamics (02/07/2023)
We introduce adversarial learning methods for data-driven generative mod...

An SDE Framework for Adversarial Training, with Convergence and Robustness Analysis (05/17/2021)
Adversarial training has gained great popularity as one of the most effe...

Adversarial Robustness of Flow-Based Generative Models (11/20/2019)
Flow-based generative models leverage invertible generator functions to ...

Learning Robust Feedback Policies from Demonstrations (03/30/2021)
In this work we propose and analyze a new framework to learn feedback co...

Quantifying the Preferential Direction of the Model Gradient in Adversarial Training With Projected Gradient Descent (09/10/2020)
Adversarial training, especially projected gradient descent (PGD), has b...

Abstraction of Markov Population Dynamics via Generative Adversarial Nets (06/24/2021)
Markov Population Models are a widespread formalism used to model the dy...
