New Generalization Bounds for Learning Kernels

12/17/2009
by Corinna Cortes, et al.

This paper presents several novel generalization bounds for the problem of learning kernels, based on an analysis of the Rademacher complexity of the corresponding hypothesis sets. Our bound for learning kernels with a convex combination of p base kernels has only a log(p) dependency on the number of kernels p, which is considerably more favorable than the previous best bound for the same problem. We also give a novel bound for learning with a linear combination of p base kernels under L_2 regularization, whose dependency on p is only in p^{1/4}.
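To illustrate the form such results take, here is the standard template of a Rademacher-complexity generalization bound (a sketch only, not the paper's exact statement, margin terms, or constants): with probability at least 1 - \delta over a sample of size m, for every hypothesis h in the class H_p,

    R(h) \le \widehat{R}_m(h) + 2\,\mathfrak{R}_m(H_p) + \sqrt{\frac{\log(1/\delta)}{2m}},

where R(h) is the true risk, \widehat{R}_m(h) the empirical risk, and \mathfrak{R}_m(H_p) the Rademacher complexity of H_p. The abstract's claims correspond to complexity scalings of roughly \mathfrak{R}_m(H_p) = O(\sqrt{\log p / m}) for the convex-combination (L_1) family and \mathfrak{R}_m(H_p) = O(p^{1/4}/\sqrt{m}) for the L_2-regularized family; the precise statements and constants are in the paper.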
