Distributional Gradient Matching for Learning Uncertain Neural Dynamics Models

06/22/2021
by   Lenart Treven, et al.

Differential equations in general and neural ODEs in particular are an essential technique in continuous-time system identification. While many deterministic learning algorithms have been designed based on numerical integration via the adjoint method, many downstream tasks such as active learning, exploration in reinforcement learning, robust control, or filtering require accurate estimates of predictive uncertainties. In this work, we propose a novel approach to estimating epistemically uncertain neural ODEs, avoiding the numerical integration bottleneck. Instead of modeling uncertainty in the ODE parameters, we directly model uncertainties in the state space. Our algorithm, distributional gradient matching (DGM), jointly trains a smoother and a dynamics model and matches their gradients by minimizing a Wasserstein loss. Our experiments show that, compared to traditional approximate inference methods based on numerical integration, our approach is faster to train, faster at predicting previously unseen trajectories, and in the context of neural ODEs, significantly more accurate.
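To make the idea concrete, the following is a minimal PyTorch sketch of distributional gradient matching as described in the abstract, not the authors' implementation. It assumes diagonal-Gaussian distributions for both the smoother and the dynamics model; the `Smoother`, `Dynamics`, `wasserstein2_diag`, and `dgm_loss` names, the time-conditioned MLP smoother, and the choice to reuse the smoother's state variance as its derivative variance are all illustrative assumptions. The smoother fits the noisy observations, its time derivative is obtained by automatic differentiation, and the two models are coupled by a closed-form 2-Wasserstein distance between their derivative distributions.

```python
import torch
import torch.nn as nn


class Smoother(nn.Module):
    """Maps time t to a diagonal Gaussian over the state x(t)."""
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(),
            nn.Linear(hidden, 2 * state_dim),  # state mean and log-variance
        )

    def forward(self, t):
        mean, log_var = self.net(t).chunk(2, dim=-1)
        return mean, log_var


class Dynamics(nn.Module):
    """Maps a state x to a diagonal Gaussian over the derivative dx/dt."""
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 2 * state_dim),  # derivative mean and log-variance
        )

    def forward(self, x):
        mean, log_var = self.net(x).chunk(2, dim=-1)
        return mean, log_var


def wasserstein2_diag(m1, v1, m2, v2):
    """Squared 2-Wasserstein distance between diagonal Gaussians, batch-averaged."""
    return ((m1 - m2) ** 2 + (v1.sqrt() - v2.sqrt()) ** 2).sum(-1).mean()


def dgm_loss(smoother, dynamics, t_obs, x_obs):
    """Data fit of the smoother plus a Wasserstein gradient-matching term."""
    t_obs = t_obs.clone().requires_grad_(True)   # shape (N, 1)
    mean, log_var = smoother(t_obs)              # shapes (N, D)

    # Gaussian negative log-likelihood of the observations under the smoother.
    nll = 0.5 * (log_var + (x_obs - mean) ** 2 / log_var.exp()).sum(-1).mean()

    # Time derivative of the smoother's mean, one state dimension at a time.
    dmean_dt = torch.stack([
        torch.autograd.grad(mean[:, d].sum(), t_obs, create_graph=True)[0].squeeze(-1)
        for d in range(mean.shape[-1])
    ], dim=-1)                                   # shape (N, D)

    # Derivative distribution predicted by the dynamics model at the smoothed states.
    f_mean, f_log_var = dynamics(mean)

    # Match the two derivative distributions. Reusing the state variance as the
    # derivative variance is a simplifying assumption of this sketch.
    w2 = wasserstein2_diag(dmean_dt, log_var.exp(), f_mean, f_log_var.exp())
    return nll + w2
```

Because the dynamics model is never numerically integrated during training, there is no adjoint or ODE-solver bottleneck in this loss; at prediction time the learned vector field can still be integrated with any standard solver.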

Related research

10/20/2022 · Neural ODEs as Feedback Policies for Nonlinear Optimal Control
Neural ordinary differential equations (Neural ODEs) model continuous ti...

12/12/2020 · Faster Policy Learning with Continuous-Time Gradients
We study the estimation of policy gradients for continuous-time systems ...

03/01/2019 · Discrete gradients for computational Bayesian inference
In this paper, we exploit the gradient flow structure of continuous-time...

07/27/2022 · Distributional Actor-Critic Ensemble for Uncertainty-Aware Continuous Control
Uncertainty quantification is one of the central challenges for machine ...

05/19/2017 · Scalable Variational Inference for Dynamical Systems
Gradient matching is a promising tool for learning parameters and state ...

05/24/2022 · Distributional Hamilton-Jacobi-Bellman Equations for Continuous-Time Reinforcement Learning
Continuous-time reinforcement learning offers an appealing formalism for...

06/10/2023 · How to Learn and Generalize From Three Minutes of Data: Physics-Constrained and Uncertainty-Aware Neural Stochastic Differential Equations
We present a framework and algorithms to learn controlled dynamics model...
