Calibrating Lévy Process from Observations Based on Neural Networks and Automatic Differentiation with Convergence Proofs

12/20/2018
by Kailai Xu, et al.

The Lévy process has been widely applied in mathematical finance, quantum mechanics, peridynamics, and other fields. However, calibrating the nonparametric multivariate distribution associated with a Lévy process from observations is very challenging because explicit distribution functions are generally unavailable. In this paper, we propose a novel algorithm based on neural networks and automatic differentiation for solving this problem. We use neural networks to approximate the nonparametric part and discretize the characteristic exponents using accurate numerical quadratures. Automatic differentiation is then applied to compute gradients, and we minimize the mismatch between the empirical and exact characteristic exponents using first-order optimization approaches. Another distinctive contribution of our work is an investigation of the approximation ability of neural networks and the convergence behavior of the algorithm. We derive an estimate of the number of neurons required for a two-layer neural network: to achieve an accuracy of ε with input dimension d, it is sufficient to use O((d/ε)^2) neurons in the first layer and O(d/ε) in the second. These counts are polynomial in the input dimension, in contrast to the exponential O(ε^-d) required by a one-layer network. We also prove convergence of the neural network with respect to the number of training samples under mild assumptions, and show that for the 2D problem the RMSE decreases linearly in the number of training samples within the regime where the consistency error dominates. To the best of our knowledge, this is the first convergence analysis for such an algorithm in the literature. Finally, we apply the algorithm to stock market data and reveal some interesting patterns in the pairwise α index.
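
For reference, the quantity being matched above is the characteristic exponent from the Lévy–Khintchine representation, together with its empirical counterpart estimated from observed increments. A minimal statement follows; the triplet (b, A, ν) and the increment notation are generic placeholders and need not match the paper's own notation.

```latex
% Characteristic exponent of a d-dimensional Levy process X_t:
%   E[ exp(i<xi, X_t>) ] = exp(t * psi(xi))
\psi(\xi) = i\langle b, \xi\rangle - \tfrac{1}{2}\langle \xi, A\xi\rangle
  + \int_{\mathbb{R}^d\setminus\{0\}}
    \left( e^{i\langle \xi, x\rangle} - 1
           - i\langle \xi, x\rangle\, \mathbf{1}_{\{\|x\|\le 1\}} \right) \nu(\mathrm{d}x)

% Empirical exponent from n i.i.d. increments Delta X_k observed at spacing Delta t:
\hat{\psi}(\xi) = \frac{1}{\Delta t}
  \log\!\left( \frac{1}{n} \sum_{k=1}^{n} e^{\, i\langle \xi, \Delta X_k\rangle} \right)
```

The mismatch minimized in the abstract is then a discrepancy such as the average of |ψ_θ(ξ_j) − ψ̂(ξ_j)|^2 over a grid of frequencies ξ_j, where ψ_θ denotes the model exponent with the nonparametric part represented by a neural network.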

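To make the optimization loop concrete, here is a minimal 1D sketch in PyTorch. This is our illustration rather than the authors' implementation: the network size, frequency and quadrature grids, learning rate, and the synthetic placeholder increments dX are all assumptions, and real observations would replace dX.

```python
# Minimal 1D sketch (illustration only): calibrate a Levy triplet (b, sigma, nu)
# where the jump density nu is a small neural network, by matching the model
# characteristic exponent to its empirical estimate via autodiff and Adam.
import torch

torch.manual_seed(0)

# Empirical characteristic exponent from observed increments (synthetic here).
dt = 0.01
dX = 0.2 * dt**0.5 * torch.randn(5000)         # placeholder for real increments
xi = torch.linspace(-20.0, 20.0, 128)          # frequency grid
ecf = torch.exp(1j * xi[:, None] * dX[None, :]).mean(dim=1)
psi_emp = torch.log(ecf) / dt                  # empirical exponent

# Nonparametric part: a two-layer network approximating the jump density nu.
net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1))
b = torch.zeros(1, requires_grad=True)         # drift
log_sigma = torch.zeros(1, requires_grad=True) # diffusion, log-parametrized

# Fixed quadrature grid for the jump integral (uniform weights; the even node
# count keeps x = 0 off the grid).
x = torch.linspace(-5.0, 5.0, 400)
w = torch.full_like(x, 10.0 / 399)
cutoff = (x.abs() <= 1.0).to(x.dtype)          # truncation indicator 1_{|x|<=1}

def psi_model(xi):
    """Levy-Khintchine exponent with the jump density given by the network."""
    nu = torch.nn.functional.softplus(net(x[:, None])).squeeze(-1)  # nu >= 0
    integrand = (torch.exp(1j * xi[:, None] * x[None, :]) - 1.0
                 - 1j * xi[:, None] * (x * cutoff)[None, :])
    jump = (integrand * (nu * w)[None, :]).sum(dim=1)
    return 1j * b * xi - 0.5 * log_sigma.exp() ** 2 * xi ** 2 + jump

opt = torch.optim.Adam(list(net.parameters()) + [b, log_sigma], lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = (psi_model(xi) - psi_emp).abs().pow(2).mean()  # exponent mismatch
    loss.backward()   # gradients of all parameters via automatic differentiation
    opt.step()
```

The same structure extends to d dimensions by feeding d-dimensional quadrature nodes to the network; the abstract's two-layer widths O((d/ε)^2) and O(d/ε) concern exactly this approximating network.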

Related research

Mildly Overparametrized Neural Nets can Memorize Training Data Efficiently (09/26/2019)
It has been observed (Zhang et al., 2016) that deep neural networks ca...

On the Correctness of Automatic Differentiation for Neural Networks with Machine-Representable Parameters (01/31/2023)
Recent work has shown that automatic differentiation over the reals is a...

Sparse-Input Neural Networks for High-dimensional Nonparametric Regression and Classification (11/21/2017)
Neural networks are usually not the tool of choice for nonparametric hig...

Delta-STN: Efficient Bilevel Optimization for Neural Networks using Structured Response Jacobians (10/26/2020)
Hyperparameter optimization of neural networks can be elegantly formulat...

Automatic differentiation approach for reconstructing spectral functions with neural networks (12/12/2021)
Reconstructing spectral functions from Euclidean Green's functions is an...

Correcting auto-differentiation in neural-ODE training (06/03/2023)
Does the use of auto-differentiation yield reasonable updates to deep ne...

Wide Network Learning with Differential Privacy (03/01/2021)
Despite intense interest and considerable effort, the current generation...
