Calibrating Lévy Process from Observations Based on Neural Networks and Automatic Differentiation with Convergence Proofs

by Kailai Xu, et al.

The Lévy process has been widely applied in mathematical finance, quantum mechanics, peridynamics, and other fields. However, calibrating the nonparametric multivariate distribution associated with a Lévy process from observations is very challenging because explicit distribution functions are unavailable. In this paper, we propose a novel algorithm based on neural networks and automatic differentiation for solving this problem. We use neural networks to approximate the nonparametric part and discretize the characteristic exponents with accurate numerical quadratures. Automatic differentiation is then applied to compute gradients, and we minimize the mismatch between the empirical and exact characteristic exponents using first-order optimization methods. Another distinctive contribution of our work is an investigation of the approximation capacity of neural networks and the convergence behavior of the algorithm. We derive an estimate of the number of neurons required in a two-layer neural network: to achieve an accuracy of ε with input dimension d, it suffices to use O((d/ε)^2) neurons in the first layer and O(d/ε) in the second. These counts are polynomial in the input dimension, in contrast with an exponential O(ε^-d) dependence. We also prove convergence of the neural network with respect to the number of training samples under mild assumptions, and show that for the 2D problem the RMSE decreases linearly in the number of training data within the regime where the consistency error dominates. To the best of our knowledge, this is the first convergence analysis of such an algorithm in the literature. Finally, we apply the algorithm to stock market data and reveal some interesting patterns in the pairwise α index.
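The pipeline described in the abstract, approximating the jump density with a neural network, discretizing the characteristic exponent by numerical quadrature, and fitting the empirical characteristic exponent with a first-order optimizer, can be sketched in one dimension. The following is a minimal illustration, not the paper's implementation: the compound Poisson test case, the network width, the frequency and quadrature grids, and the use of central finite differences as a stand-in for automatic differentiation are all assumptions of this sketch (drift and diffusion terms are omitted for brevity).

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulate increments of a compound Poisson process (illustrative test case) ---
# Jump rate lam with standard normal jump sizes; the exact characteristic
# exponent is psi(xi) = lam * (exp(-xi^2 / 2) - 1).
lam, dt, n_obs = 2.0, 0.1, 20000
counts = rng.poisson(lam * dt, size=n_obs)
increments = np.array([rng.normal(size=c).sum() for c in counts])

# --- Empirical characteristic exponent on a frequency grid ---
xi = np.linspace(-3.0, 3.0, 31)
phi_hat = np.exp(1j * np.outer(xi, increments)).mean(axis=1)
psi_hat = np.log(phi_hat) / dt

# --- Quadrature grid for the jump density (trapezoidal rule) ---
z = np.linspace(-5.0, 5.0, 101)
w = np.full_like(z, z[1] - z[0]); w[0] *= 0.5; w[-1] *= 0.5

H = 8  # width of the small two-layer network nu_theta(z)

def unpack(p):
    return p[:H], p[H:2 * H], p[2 * H:3 * H], p[3 * H]

def nu(p):
    # Two-layer network; softplus output keeps the density nonnegative.
    W1, b1, w2, b2 = unpack(p)
    h = np.tanh(np.outer(z, W1) + b1)          # shape (nz, H)
    return np.log1p(np.exp(h @ w2 + b2))       # shape (nz,)

def psi_model(p):
    # psi_theta(xi) = integral of (e^{i xi z} - 1) nu_theta(z) dz, by quadrature.
    kern = np.exp(1j * np.outer(xi, z)) - 1.0  # shape (nxi, nz)
    return kern @ (w * nu(p))

def loss(p):
    r = psi_model(p) - psi_hat
    return np.mean(r.real**2 + r.imag**2)

def fd_grad(p, eps=1e-6):
    # Stand-in for automatic differentiation: central finite differences.
    g = np.zeros_like(p)
    for i in range(p.size):
        e = np.zeros_like(p); e[i] = eps
        g[i] = (loss(p + e) - loss(p - e)) / (2 * eps)
    return g

p = 0.1 * rng.standard_normal(3 * H + 1)
init_loss = loss(p)
for _ in range(100):
    g = fd_grad(p)
    lr = 0.05
    while lr > 1e-8 and loss(p - lr * g) > loss(p):
        lr *= 0.5                              # backtracking line search
    p = p - lr * g
final_loss = loss(p)
print(f"loss: {init_loss:.4f} -> {final_loss:.4f}")
```

In a real implementation the finite-difference routine would be replaced by reverse-mode automatic differentiation (e.g., a deep learning framework's gradient), which is what makes the approach practical for multivariate problems with many parameters.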




