Analysis of Deep Ritz Methods for Laplace Equations with Dirichlet Boundary Conditions

by Chenguang Duan et al.

Deep Ritz methods (DRM) have been shown numerically to be efficient in solving partial differential equations. In this paper, we present a convergence rate in the H^1 norm for deep Ritz methods applied to Laplace equations with Dirichlet boundary conditions, where the error depends explicitly on the depth and width of the deep neural networks and on the number of samples. As a consequence, the depth and width of the networks can be chosen appropriately in terms of the number of training samples. The main idea of the proof is to decompose the total error of the DRM into three parts: the approximation error, the statistical error, and the error caused by the boundary penalty. We bound the approximation error in the H^1 norm with ReLU^2 networks and control the statistical error via Rademacher complexity. In particular, we derive a bound on the Rademacher complexity of the non-Lipschitz composition of the gradient norm with a ReLU^2 network, which is of independent interest. We also analyze the error induced by the boundary penalty method and give an a priori rule for tuning the penalty parameter.
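To make the penalized Ritz formulation concrete, here is a minimal PyTorch sketch (not taken from the paper) of the DRM loss for -Δu = f on the unit square with a zero Dirichlet condition enforced by a boundary penalty. The network size, the source term f, the sample counts, and the penalty weight lam are all illustrative assumptions; the a priori rule for choosing lam is given in the paper itself.

```python
import torch

class ReQU(torch.nn.Module):
    """ReLU^2 (squared ReLU) activation, the network class analyzed in the paper."""
    def forward(self, x):
        return torch.relu(x) ** 2

def make_net(width: int = 16, depth: int = 3) -> torch.nn.Sequential:
    # Fully connected ReLU^2 network mapping (x, y) in R^2 to a scalar u(x, y).
    layers = [torch.nn.Linear(2, width), ReQU()]
    for _ in range(depth - 1):
        layers += [torch.nn.Linear(width, width), ReQU()]
    layers.append(torch.nn.Linear(width, 1))
    return torch.nn.Sequential(*layers)

def f(x: torch.Tensor) -> torch.Tensor:
    # Illustrative source term whose exact solution is sin(pi x) sin(pi y).
    return 2 * torch.pi ** 2 * torch.sin(torch.pi * x[:, :1]) * torch.sin(torch.pi * x[:, 1:])

def ritz_loss(net, lam: float, n_in: int = 256, n_bd: int = 256) -> torch.Tensor:
    # Monte Carlo estimate of the Ritz energy  (1/2)∫|∇u|^2 dx - ∫ f u dx  on (0,1)^2.
    x = torch.rand(n_in, 2, requires_grad=True)
    u = net(x)
    grad_u = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    energy = (0.5 * grad_u.pow(2).sum(dim=1) - f(x).squeeze(-1) * u.squeeze(-1)).mean()
    # Boundary penalty  lam * ∫_∂Ω u^2 ds, sampled uniformly on the four edges.
    t = torch.rand(n_bd // 4, 1)
    z, o = torch.zeros_like(t), torch.ones_like(t)
    xb = torch.cat([torch.cat([t, z], 1), torch.cat([t, o], 1),
                    torch.cat([z, t], 1), torch.cat([o, t], 1)], 0)
    return energy + lam * net(xb).pow(2).mean()

# A few optimization steps with illustrative hyperparameters.
net = make_net()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = ritz_loss(net, lam=100.0)
    loss.backward()
    opt.step()
```

The total error of the trained network then splits exactly as in the analysis: how well the ReLU^2 class can represent the solution (approximation error), the gap between the Monte Carlo loss and the true energy (statistical error), and the mismatch introduced by replacing the hard boundary condition with the lam-weighted penalty.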


