Second-Order Kernel Online Convex Optimization with Adaptive Sketching

06/15/2017
by Daniele Calandriello, et al.

Kernel online convex optimization (KOCO) is a framework combining the expressiveness of non-parametric kernel models with the regret guarantees of online learning. First-order KOCO methods such as functional gradient descent require only O(t) time and space per iteration, and, when the only information on the losses is their convexity, achieve a minimax optimal O(√T) regret. Nonetheless, many common losses in kernel problems, such as the squared loss, logistic loss, and squared hinge loss, possess stronger curvature that can be exploited. In this case, second-order KOCO methods achieve O(log(Det(K))) regret, which we show scales as O(d_eff log T), where d_eff is the effective dimension of the problem and is usually much smaller than O(√T). The main drawback of second-order methods is their much higher O(t^2) space and time complexity. In this paper, we introduce kernel online Newton step (KONS), a new second-order KOCO method that also achieves O(d_eff log T) regret. To address the computational complexity of second-order methods, we introduce a new matrix sketching algorithm for the kernel matrix K_t, and show that for a chosen parameter γ ≤ 1 our Sketched-KONS reduces the space and time complexity by a factor of γ^2 to O(t^2 γ^2) space and time per iteration, while incurring only 1/γ times more regret.
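The abstract contrasts cheap O(t) first-order updates with O(t^2)-per-step second-order ones and motivates sketching the kernel matrix K_t. As a rough illustration of where that cost comes from, the sketch below implements a simplified second-order kernel learner for the squared loss (a regularized RKHS solution refit at every round, which for a quadratic objective is exactly one Newton step). It is not the paper's KONS or Sketched-KONS: the RBF kernel, the fixed regularization `reg`, and the names `rbf_kernel` / `KernelOnlineNewton` are assumptions made for this example.

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Gaussian RBF kernel matrix between the rows of X and the rows of Y.
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

class KernelOnlineNewton:
    # Simplified second-order online learner in the RKHS for the squared loss.
    # At round t it keeps all t past points, so each update needs O(t^2) space
    # and superlinear time: the bottleneck that kernel-matrix sketching is
    # meant to remove. This is an illustration, not the paper's algorithm.
    def __init__(self, sigma=1.0, reg=1.0):
        self.sigma = sigma
        self.reg = reg          # ridge regularization (assumed constant here)
        self.X = None           # past inputs, shape (t, d)
        self.y = None           # past targets, shape (t,)
        self.alpha = None       # dual coefficients of the current predictor

    def predict(self, x):
        if self.X is None:
            return 0.0
        k = rbf_kernel(self.X, x[None, :], self.sigma)[:, 0]
        return float(k @ self.alpha)

    def update(self, x, y):
        # Append the new point and refit the regularized solution. Because the
        # regularized cumulative squared loss is quadratic, a single Newton
        # step lands on its minimizer, whose dual form solves
        # (K_t + reg * I) alpha = y_{1:t}.
        self.X = x[None, :] if self.X is None else np.vstack([self.X, x[None, :]])
        self.y = np.array([y], float) if self.y is None else np.append(self.y, y)
        K = rbf_kernel(self.X, self.X, self.sigma)
        self.alpha = np.linalg.solve(K + self.reg * np.eye(len(self.y)), self.y)

# Toy online protocol: predict, suffer the squared loss, then update.
rng = np.random.default_rng(0)
learner = KernelOnlineNewton(sigma=0.5, reg=1.0)
cum_loss = 0.0
for t in range(200):
    x_t = rng.uniform(-1.0, 1.0, size=2)
    y_t = np.sin(3.0 * x_t[0]) + 0.1 * rng.normal()
    cum_loss += (learner.predict(x_t) - y_t) ** 2   # loss before seeing y_t
    learner.update(x_t, y_t)
print(f"cumulative squared loss after 200 rounds: {cum_loss:.2f}")
```

Loosely speaking, replacing the full set of stored points with a much smaller, adaptively chosen subset (and a correspondingly smaller linear system) is the role that sketching the kernel matrix plays in reducing the per-iteration cost, at the price of the 1/γ factor in the regret stated above.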
