Calibrating Lévy Process from Observations Based on Neural Networks and Automatic Differentiation with Convergence Proofs
Lévy processes are widely applied in mathematical finance, quantum mechanics, peridynamics, and other fields. However, calibrating the nonparametric multivariate distribution associated with a Lévy process from observations is very challenging due to the lack of explicit distribution functions. In this paper, we propose a novel algorithm based on neural networks and automatic differentiation for solving this problem. We use neural networks to approximate the nonparametric part and discretize the characteristic exponents using accurate numerical quadratures. Automatic differentiation is then applied to compute gradients, and we minimize the mismatch between the empirical and exact characteristic exponents using first-order optimization approaches. Another distinctive contribution of our work is an investigation of the approximation ability of neural networks and the convergence behavior of the algorithm. We derive an estimate of the number of neurons required for a two-layer neural network: to achieve an accuracy of ε with input dimension d, it is sufficient to use O((d/ε)^2) neurons in the first layer and O(d/ε) in the second. These numbers are polynomial in the input dimension, compared with the exponential O(ε^-d) required for a one-layer network. We also prove the convergence of the neural network with respect to the number of training samples under mild assumptions, and show that the RMSE decreases linearly with the number of training data in the regime where the consistency error dominates for the 2D problem. To the best of our knowledge, this is the first convergence analysis for such an algorithm in the literature. Finally, we apply the algorithm to stock-market data and reveal some interesting patterns in the pairwise α index.
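To make the pipeline described above concrete: the nonparametric part is the Lévy density ν, which enters the Lévy–Khintchine characteristic exponent ψ(ξ) = ibξ − σ²ξ²/2 + ∫(e^{iξx} − 1)ν(x)dx (written here for the 1D, finite-activity case, so the small-jump compensator is dropped), and the empirical exponent from increments observed at spacing Δt is ψ̂(ξ) = log φ̂(ξ)/Δt with φ̂ the empirical characteristic function. The sketch below is a minimal PyTorch rendering of this calibration loop on synthetic 1D data, under those simplifying assumptions; the framework, architecture, quadrature grids, and hyperparameters are illustrative choices, not the authors' implementation.

```python
import math
import torch

torch.manual_seed(0)

# ---- Synthetic 1D observations: increments of a jump-diffusion over time dt.
# (Illustrative data only; in practice the increments come from observations.)
n, dt = 20_000, 0.1
b_true, sig_true = 0.5, 0.4        # drift and diffusion of the true process
lam, mu_j, s_j = 2.0, 1.0, 0.3     # jump intensity and N(mu_j, s_j^2) jump sizes
K = torch.poisson(torch.full((n,), lam * dt))            # jump counts per increment
X = (b_true * dt + sig_true * math.sqrt(dt) * torch.randn(n)
     + K * mu_j + torch.sqrt(K) * s_j * torch.randn(n))  # exact for Gaussian jumps

# ---- Empirical characteristic exponent psi_hat(xi) = log(phi_hat(xi)) / dt.
xi = torch.linspace(-8.0, 8.0, 81)                       # collocation frequencies
phase = xi[:, None] * X[None, :]                         # (n_xi, n)
phi_re, phi_im = torch.cos(phase).mean(1), torch.sin(phase).mean(1)
psi_hat_re = 0.5 * torch.log(phi_re**2 + phi_im**2) / dt
psi_hat_im = torch.atan2(phi_im, phi_re) / dt            # principal branch of log

# ---- Model: psi(xi) = i*b*xi - 0.5*sig^2*xi^2 + \int (e^{i xi x} - 1) nu(x) dx,
# with the finite-activity Levy density nu parameterized by a neural network.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1), torch.nn.Softplus())         # enforces nu_theta >= 0
b = torch.zeros(1, requires_grad=True)
log_sig = torch.zeros(1, requires_grad=True)

xq = torch.linspace(-4.0, 4.0, 401)                      # quadrature nodes
w = torch.full((401,), 8.0 / 400)                        # trapezoidal weights
w[0] = w[-1] = 4.0 / 400

def model_psi(xi):
    """Quadrature approximation of the model characteristic exponent."""
    nu = net(xq[:, None]).squeeze(-1)                    # nu_theta on the grid
    arg = xi[:, None] * xq[None, :]                      # (n_xi, n_q)
    re = -0.5 * torch.exp(2 * log_sig) * xi**2 + ((torch.cos(arg) - 1) * nu) @ w
    im = b * xi + (torch.sin(arg) * nu) @ w
    return re, im

# ---- Minimize the exponent mismatch; gradients come from automatic differentiation.
opt = torch.optim.Adam(list(net.parameters()) + [b, log_sig], lr=1e-3)
for step in range(5000):
    opt.zero_grad()
    re, im = model_psi(xi)
    loss = ((re - psi_hat_re)**2 + (im - psi_hat_im)**2).mean()
    loss.backward()
    opt.step()

print(f"b ≈ {b.item():.3f}, sigma ≈ {torch.exp(log_sig).item():.3f}")
```

Two design choices worth noting in this sketch: the Softplus output keeps the learned density nonnegative without constrained optimization, and taking the principal branch of the complex logarithm is only safe when φ̂ stays away from the negative real axis over the chosen frequency range, which bounds how large the collocation frequencies can usefully be.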