Online Learning of Smooth Functions

01/04/2023
by Jesse Geneson, et al.

In this paper, we study the online learning of real-valued functions where the hidden function is known to have certain smoothness properties. Specifically, for q ≥ 1, let ℱ_q be the class of absolutely continuous functions f: [0,1] → ℝ such that ‖f'‖_q ≤ 1. For q ≥ 1 and d ∈ ℤ^+, let ℱ_{q,d} be the class of functions f: [0,1]^d → ℝ such that any function g: [0,1] → ℝ formed by fixing all but one parameter of f is in ℱ_q. For any class of real-valued functions ℱ and any p > 0, let opt_p(ℱ) be the best upper bound on the sum of p-th powers of absolute prediction errors that a learner can guarantee in the worst case. In the single-variable setup, we find new bounds for opt_p(ℱ_q) that are sharp up to a constant factor. We show that opt_{1+ε}(ℱ_∞) = Θ(ε^{-1/2}) for all ε ∈ (0,1), and that opt_{1+ε}(ℱ_q) = Θ(ε^{-1/2}) for all q ≥ 2 and ε ∈ (0,1). We also show that opt_2(ℱ_{1+ε}) = Θ(ε^{-1}) for ε ∈ (0,1). In addition, we obtain new exact results by proving that opt_p(ℱ_q) = 1 for q ∈ (1,2) and p ≥ 2 + 1/(q-1). In the multi-variable setup, we establish inequalities relating opt_p(ℱ_{q,d}) to opt_p(ℱ_q) and show that opt_p(ℱ_{∞,d}) is infinite when p < d and finite when p > d. We also obtain sharp bounds on learning ℱ_{∞,d} for p < d when the number of trials is bounded.
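To make the quantity opt_p(ℱ) concrete, the sketch below plays out the underlying prediction game for the Lipschitz class ℱ_∞ (i.e., ‖f'‖_∞ ≤ 1): on each trial the learner sees a query point, predicts a value, then the true value is revealed and the learner pays the p-th power of its absolute error. The midpoint strategy shown is a standard baseline for Lipschitz classes, not necessarily the paper's algorithm, and the zero default on the first trial is an illustrative normalization, since without some normalization of the class the first error can be made arbitrarily large.

```python
import random

def midpoint_learner_loss(f, xs, p):
    """Online prediction game behind opt_p: on each trial the learner
    sees x, predicts y_hat, then f(x) is revealed and the learner pays
    |y_hat - f(x)|**p.  For a 1-Lipschitz target, each past pair
    (xi, yi) confines f(x) to [yi - |x - xi|, yi + |x - xi|]; predicting
    the midpoint of the intersection of these intervals is a natural
    baseline strategy."""
    history = []          # revealed (x_i, f(x_i)) pairs
    total_loss = 0.0
    for x in xs:
        if history:
            lo = max(y - abs(x - xi) for xi, y in history)
            hi = min(y + abs(x - xi) for xi, y in history)
            y_hat = (lo + hi) / 2   # midpoint of the feasible interval
        else:
            y_hat = 0.0             # illustrative default: no data yet
        y = f(x)
        total_loss += abs(y_hat - y) ** p
        history.append((x, y))
    return total_loss

# Example: a 1-Lipschitz function on [0,1] and random query points.
f = lambda x: abs(x - 0.3)
xs = [random.random() for _ in range(100)]
print(midpoint_learner_loss(f, xs, p=2))  # sum of squared errors
```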
