An analysis of Ermakov-Zolotukhin quadrature using kernels

by Ayoub Belhadji, et al.
ENS Lyon

We study a quadrature rule, proposed by Ermakov and Zolotukhin in the sixties, through the lens of kernel methods. The nodes of this quadrature rule follow the distribution of a determinantal point process, while the weights are defined through a linear system, similarly to optimal kernel quadrature. In this work, we show how these two classes of quadrature are related, and we derive a tractable formula for the expected squared worst-case integration error of the former quadrature over the unit ball of an RKHS. In particular, this formula involves the eigenvalues of the corresponding kernel and improves on the existing theoretical guarantees for optimal kernel quadrature with determinantal point processes.
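The "weights through a linear system" construction can be sketched concretely. The following is a minimal illustration, not the paper's method: it uses i.i.d. uniform nodes as a stand-in for determinantal point process sampling, and the kernel k(x, y) = 1 + min(x, y) on [0, 1], chosen because its mean embedding under the uniform measure has the closed form mu(x) = 1 + x - x^2/2. The weights solve K w = mu(nodes), and the squared worst-case error over the RKHS unit ball is then the double integral of k minus mu^T K^{-1} mu.

```python
import numpy as np

# Illustrative sketch (not the paper's exact construction):
# kernel k(x, y) = 1 + min(x, y) on [0, 1], uniform base measure.
rng = np.random.default_rng(0)

def k(x, y):
    return 1.0 + np.minimum(x, y)

def mu(x):
    # Mean embedding under Uniform[0, 1]:
    # mu(x) = int_0^1 k(x, y) dy = 1 + x - x^2 / 2
    return 1.0 + x - x**2 / 2.0

n = 8
# Stand-in for DPP sampling: i.i.d. uniform nodes.
nodes = np.sort(rng.uniform(0.0, 1.0, size=n))

# Weights from the linear system K w = mu(nodes),
# as in optimal kernel quadrature.
K = k(nodes[:, None], nodes[None, :])
w = np.linalg.solve(K, mu(nodes))

# Squared worst-case error on the RKHS unit ball:
# wce^2 = iint k - mu^T K^{-1} mu, with iint k = 4/3 for this kernel.
wce_sq = 4.0 / 3.0 - mu(nodes) @ w

# Quadrature estimate of int_0^1 x dx = 1/2 for the test function f(x) = x,
# which lies in this RKHS with norm at most 1.
estimate = w @ nodes
```

By Cauchy-Schwarz in the RKHS, the integration error for any unit-norm function is bounded by the square root of `wce_sq`; the paper's contribution is a tractable formula for the expectation of this quantity when the nodes are drawn from a determinantal point process.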


Related articles:

- Acceleration of the kernel herding algorithm by improved gradient approximation
- Kernel quadrature by applying a point-wise gradient descent method to discrete energies
- Positively Weighted Kernel Quadrature via Subsampling
- Integration in reproducing kernel Hilbert spaces of Gaussian kernels
- Kernel quadrature with DPPs
- On the Size of the Online Kernel Sparsification Dictionary
- Reified unit resolution and the failed literal rule
