Overcoming the curse of dimensionality for approximating Lyapunov functions with deep neural networks under a small-gain condition

01/23/2020
by Lars Grüne, et al.

We propose a deep neural network architecture capable of storing approximations of Lyapunov functions for systems of ordinary differential equations. Under a small-gain condition on the system, the number of neurons needed to approximate a Lyapunov function to a fixed accuracy grows only polynomially in the state dimension, i.e., the proposed approach is able to overcome the curse of dimensionality.
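To illustrate the compositional idea behind such an architecture, the sketch below builds a Lyapunov-function approximator as a sum of small subnetworks, each acting only on a low-dimensional subsystem state. This is only a minimal, hypothetical sketch of the general principle (class and parameter names such as `CompositionalLyapunovNet`, the block partition, activation choice, and hidden widths are illustrative assumptions), not the authors' exact construction; it shows why the total neuron count can scale with the number of subsystems rather than exponentially in the full state dimension.

```python
# Hypothetical sketch: V(x) ~ sum_i V_i(z_i), where each z_i is a small block
# of the state x. Each term V_i is approximated by a small MLP, so the total
# number of neurons grows with the number of blocks, not exponentially in the
# overall state dimension.
import torch
import torch.nn as nn


class SubsystemNet(nn.Module):
    """Small MLP approximating one term V_i(z_i) of the composite function."""

    def __init__(self, sub_dim: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(sub_dim, hidden),
            nn.Softplus(),          # smooth activation; choice is illustrative
            nn.Linear(hidden, 1),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)


class CompositionalLyapunovNet(nn.Module):
    """Approximates V(x) = sum_i V_i(z_i) for a given partition of coordinates."""

    def __init__(self, blocks, hidden: int = 32):
        super().__init__()
        self.blocks = blocks        # index sets defining the subsystem states z_i
        self.subnets = nn.ModuleList(
            SubsystemNet(len(b), hidden) for b in blocks
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        terms = [net(x[:, b]) for net, b in zip(self.subnets, self.blocks)]
        return torch.stack(terms, dim=0).sum(dim=0)


if __name__ == "__main__":
    # Example: a 10-dimensional state split into five 2-dimensional subsystems.
    blocks = [[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]
    V = CompositionalLyapunovNet(blocks)
    x = torch.randn(4, 10)          # batch of 4 states
    print(V(x).shape)               # -> torch.Size([4, 1])
```

In this sketch, doubling the state dimension by adding more blocks of the same size only adds proportionally many subnetworks, which is the mechanism that avoids the exponential growth in network size.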

