Sharper bounds for online learning of smooth functions of a single variable

05/30/2021
by   Jesse Geneson, et al.

We investigate the generalization of the mistake-bound model to continuous real-valued functions of a single variable. Let ℱ_q be the class of absolutely continuous functions f: [0, 1] → ℝ with ||f'||_q ≤ 1, and define opt_p(ℱ_q) as the best possible bound on the worst-case sum of the p-th powers of the absolute prediction errors over any number of trials. Kimber and Long (Theoretical Computer Science, 1995) proved for q ≥ 2 that opt_p(ℱ_q) = 1 when p ≥ 2 and opt_p(ℱ_q) = ∞ when p = 1. For 1 < p < 2, writing p = 1 + ϵ, the only previously known bound was opt_p(ℱ_q) = O(ϵ^{-1}), also from Kimber and Long. We show for all ϵ ∈ (0, 1) and q ≥ 2 that opt_{1+ϵ}(ℱ_q) = Θ(ϵ^{-1/2}), where the constants in the bound do not depend on q. We also show that opt_{1+ϵ}(ℱ_∞) = Θ(ϵ^{-1/2}).
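As a concrete restatement of the setup, the following minimal LaTeX sketch spells out the definitions and the main result. The trial protocol and the quantifier order in opt_p are my reading of the mistake-bound model of Kimber and Long, stated as assumptions rather than quotations from the paper; the symbols x_t and ŷ_t (the trial point and the learner's prediction) are introduced here for illustration.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% The function class: absolutely continuous functions on [0,1]
% whose derivative has q-norm at most 1.
\[
\mathcal{F}_q = \left\{ f : [0,1] \to \mathbb{R} \;\middle|\;
  f \text{ absolutely continuous},\ \lVert f' \rVert_q \le 1 \right\}.
\]

% One trial of the online game (assumed protocol): the adversary reveals
% x_t, the learner predicts \hat{y}_t, the adversary reveals f(x_t), and
% the learner pays |\hat{y}_t - f(x_t)|^p. Then opt_p is the best
% worst-case total loss a learner can guarantee (quantifier order is my
% reading of the model, not a quotation from the paper):
\[
\operatorname{opt}_p(\mathcal{F}_q)
  = \inf_{\text{learner}} \;
    \sup_{f \in \mathcal{F}_q,\ x_1, x_2, \ldots} \;
    \sum_{t} \left| \hat{y}_t - f(x_t) \right|^{p}.
\]

% The main result, for p = 1 + \epsilon:
\[
\operatorname{opt}_{1+\epsilon}(\mathcal{F}_q)
  = \Theta\!\left(\epsilon^{-1/2}\right)
\quad \text{for all } \epsilon \in (0,1),\ q \ge 2,
\qquad
\operatorname{opt}_{1+\epsilon}(\mathcal{F}_\infty)
  = \Theta\!\left(\epsilon^{-1/2}\right).
\]

\end{document}

In this formulation, the contribution is to pin down the exact growth rate in the regime 1 < p < 2, replacing the earlier O(ϵ^{-1}) upper bound with a tight Θ(ϵ^{-1/2}).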
