Conditional mean embeddings as regressors - supplementary

05/21/2012
by Steffen Grünewälder, et al.

We demonstrate an equivalence between reproducing kernel Hilbert space (RKHS) embeddings of conditional distributions and vector-valued regressors. This connection introduces a natural regularized loss function which the RKHS embeddings minimise, providing an intuitive understanding of the embeddings and a justification for their use. Furthermore, the equivalence allows the application of vector-valued regression methods and results to the problem of learning conditional distributions. Using this link we derive a sparse version of the embedding by considering alternative formulations. Further, by applying convergence results for vector-valued regression to the embedding problem we derive minimax convergence rates which are O(log(n)/n) -- compared to current state of the art rates of O(n^{-1/4}) -- and are valid under milder and more intuitive assumptions. These minimax upper rates coincide with lower rates up to a logarithmic factor, showing that the embedding method achieves nearly optimal rates. We study our sparse embedding algorithm in a reinforcement learning task where the algorithm shows significant improvement in sparsity over an incomplete Cholesky decomposition.
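The equivalence is constructive: viewed as vector-valued ridge regression, the embedding of P(Y | X = x) is estimated by a weighted combination of the training outputs, with weights alpha(x) = (K + n*lambda*I)^{-1} k_x, where K is the input Gram matrix and k_x the vector of kernel evaluations at x. The following NumPy sketch illustrates this estimator under assumed choices; the Gaussian kernel, the regularization value, and all function names are illustrative conventions, not the paper's notation.

import numpy as np

# Sketch of the conditional mean embedding estimator as kernel ridge
# regression. Kernel choice, lambda, and all names are assumptions.

def gaussian_kernel(A, B, sigma=1.0):
    # Gram matrix k(a_i, b_j) for a Gaussian RBF kernel.
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def cme_weights(X, x_new, lam=1e-2):
    # alpha(x) = (K + n*lam*I)^{-1} k_x; the embedding of P(Y | X = x)
    # is then estimated by sum_i alpha_i(x) phi(y_i).
    n = X.shape[0]
    K = gaussian_kernel(X, X)
    k_x = gaussian_kernel(X, x_new)
    return np.linalg.solve(K + n * lam * np.eye(n), k_x)

# Usage: expectations of functions in the output RKHS under P(Y | X = x)
# are approximated by the weighted sample average sum_i alpha_i g(y_i).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
Y = np.sin(3 * X) + 0.1 * rng.normal(size=(100, 1))
alpha = cme_weights(X, np.array([[0.5]]))
print(float(alpha.T @ Y))  # approximate E[Y | X = 0.5]

The sparse variant studied in the paper arises from an alternative formulation of this regression problem; roughly, it seeks a weight vector supported on fewer training points than the dense solution above, which serves as the baseline.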

Related research

08/02/2022  Optimal Rates for Regularized Conditional Mean Embedding Learning
We address the consistency of a kernel ridge regression estimate of the ...

05/16/2021  Sobolev Norm Learning Rates for Conditional Mean Embeddings
We develop novel learning rates for conditional mean embeddings by apply...

02/12/2023  Recursive Estimation of Conditional Kernel Mean Embeddings
Kernel mean embeddings, a widely used technique in machine learning, map...

02/10/2020  A Measure-Theoretic Approach to Kernel Conditional Mean Embeddings
We present a new operator-free, measure-theoretic definition of the cond...

09/29/2010  Optimal learning rates for Kernel Conjugate Gradient regression
We prove rates of convergence in the statistical sense for kernel-based ...

02/24/2017  Learning Rates for Kernel-Based Expectile Regression
Conditional expectiles are becoming an increasingly important tool in fi...

11/07/2016  Optimal rates for the regularized learning algorithms under general source condition
We consider the learning algorithms under general source condition with ...
