Parameter-free version of Adaptive Gradient Methods for Strongly-Convex Functions

06/11/2023
by Deepak Gouda et al.

The optimal learning rate for adaptive gradient methods applied to λ-strongly convex functions depends on the strong-convexity parameter λ and the learning rate η, which are rarely known in practice. In this paper, we adapt a universal algorithm along the lines of MetaGrad to remove this dependence on λ and η. The main idea is to concurrently run multiple experts and combine their predictions through a master algorithm. The master enjoys an O(d log T) regret bound.
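The expert-aggregation idea can be sketched as follows. This is an illustrative toy, not the paper's exact algorithm: each expert here takes a plain gradient step with its own learning rate (MetaGrad-style methods instead run online Newton steps on a surrogate loss), and the function and parameter names are hypothetical. The master combines expert iterates with eta-tilted exponential weights, so no single learning rate must be known in advance.

```python
import numpy as np

def metagrad_sketch(grad, T=500, d=2, n_experts=5):
    """Toy MetaGrad-style master/expert aggregation (illustrative only).

    grad: callable returning the gradient at the master's current point.
    Each expert i uses its own fixed learning rate eta_i; the master
    aggregates expert iterates with tilted exponential weights updated
    from a MetaGrad-style surrogate loss.
    """
    etas = np.array([2.0 ** -(i + 1) for i in range(n_experts)])  # expert learning rates
    pis = np.full(n_experts, 1.0 / n_experts)                     # master weights over experts
    W = np.zeros((n_experts, d))                                  # expert iterates, one row each
    for _ in range(T):
        # Master prediction: eta-tilted weighted average of expert iterates.
        z = pis * etas
        w = z @ W / z.sum()
        g = grad(w)
        # Surrogate loss per expert: -eta <g, w_i - w> + eta^2 <g, w_i - w>^2.
        s = W @ g - w @ g
        surr = -etas * s + (etas * s) ** 2
        # Exponential-weights update on the surrogate losses.
        pis *= np.exp(-surr)
        pis /= pis.sum()
        # Every expert steps along the master's gradient with its own eta.
        W -= etas[:, None] * g
    return w
```

For example, on the quadratic f(w) = ½‖w − c‖² (gradient w − c), the master converges to c without any expert's learning rate being tuned to the curvature.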
