Learning Deep Nets for Gravitational Dynamics with Unknown Disturbance through Physical Knowledge Distillation: Initial Feasibility Study

10/04/2022
by Hongbin Lin, et al.

Learning high-performance deep neural networks for dynamic modeling of high Degree-Of-Freedom (DOF) robots remains challenging due to sampling complexity. Unknown system disturbances caused by unmodeled dynamics (such as internal compliance and cabling) further exacerbate the problem. In this paper, a novel framework characterized by both high data efficiency and disturbance-adapting capability is proposed to address the problem of modeling gravitational dynamics using deep nets in feedforward gravity compensation control for high-DOF master manipulators with unknown disturbance. In particular, Feedforward Deep Neural Networks (FDNNs) are learned both from prior knowledge of an existing analytical model and from observation of the robot system via Knowledge Distillation (KD). Through extensive experiments on high-DOF master manipulators with significant disturbance, we show that our method surpasses a standard Learning-from-Scratch (LfS) approach in terms of data efficiency and disturbance adaptation. Our initial feasibility study demonstrates the potential to outperform the analytical teacher model as the amount of training data increases.
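The distillation-plus-adaptation idea described in the abstract can be sketched in a minimal form: a student model is first fit to torque labels produced cheaply by an analytical teacher, then corrected with a residual fit on a small set of real observations that contain an unmodeled disturbance. Everything below is an illustrative assumption — the two-link gravity model, the hand-built feature map (standing in for an FDNN), and the residual-correction step are not the paper's actual architecture or training procedure.

```python
import numpy as np

# Illustrative analytical teacher: gravity torques for a planar 2-link arm.
# Link masses/lengths are made-up values, not taken from the paper.
def analytical_gravity(q, m1=1.0, m2=0.8, l1=0.5, l2=0.4, g=9.81):
    """q: (N, 2) joint angles -> (N, 2) gravity torques."""
    t1 = ((m1 * l1 / 2 + m2 * l1) * np.cos(q[:, 0])
          + m2 * l2 / 2 * np.cos(q[:, 0] + q[:, 1])) * g
    t2 = m2 * l2 / 2 * g * np.cos(q[:, 0] + q[:, 1])
    return np.stack([t1, t2], axis=1)

def features(q):
    """Hand-built nonlinear features standing in for a deep net's hidden layer."""
    return np.concatenate(
        [np.cos(q), np.sin(q),
         np.cos(q.sum(axis=1, keepdims=True)),
         np.ones((len(q), 1))], axis=1)

rng = np.random.default_rng(0)

# Stage 1 (distillation): the analytical teacher labels arbitrarily many
# cheap samples; the student is fit to the teacher's torque outputs.
q_kd = rng.uniform(-np.pi, np.pi, size=(2000, 2))
W, *_ = np.linalg.lstsq(features(q_kd), analytical_gravity(q_kd), rcond=None)

# Stage 2 (adaptation): a few real observations include an unmodeled
# disturbance (here, a synthetic cable-like torque on joint 1).
q_obs = rng.uniform(-np.pi, np.pi, size=(50, 2))
disturbance = np.concatenate(
    [0.3 * np.sin(q_obs[:, :1]), np.zeros((50, 1))], axis=1)
tau_obs = analytical_gravity(q_obs) + disturbance

# Fit a residual correction on the scarce real data only, so the teacher's
# knowledge is kept and only the disturbance has to be learned from scratch.
R, *_ = np.linalg.lstsq(features(q_obs),
                        tau_obs - features(q_obs) @ W, rcond=None)

# The adapted student matches teacher-plus-disturbance on the observed data.
err = np.abs(features(q_obs) @ (W + R) - tau_obs).mean()
```

The two-stage split mirrors the claimed data efficiency: the teacher supplies unlimited labels for the bulk of the gravity model, so only the (low-complexity) disturbance residual must be estimated from the small real dataset.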


Related research

- Progressive Label Distillation: Learning Input-Efficient Deep Neural Networks (01/26/2019)
- Embracing the Dark Knowledge: Domain Generalization Using Regularized Knowledge Distillation (07/06/2021)
- The Staged Knowledge Distillation in Video Classification: Harmonizing Student Progress by a Complementary Weakly Supervised Framework (07/11/2023)
- Spherical Knowledge Distillation (10/15/2020)
- Lipschitz Continuity Guided Knowledge Distillation (08/29/2021)
- On feedforward control using physics-guided neural networks: Training cost regularization and optimized initialization (01/28/2022)
