Improved Dynamic Regret for Online Frank-Wolfe

02/11/2023
by Yuanyu Wan, et al.

To deal with non-stationary online problems with complex constraints, we investigate the dynamic regret of online Frank-Wolfe (OFW), an efficient projection-free algorithm for online convex optimization. It is well known that in offline optimization, the smoothness of functions, as well as their strong convexity combined with specific properties of the constraint set, can be exploited to achieve fast convergence rates for the Frank-Wolfe (FW) algorithm. For OFW, however, previous studies only establish a dynamic regret bound of O(√T(1+V_T+√(D_T))) by utilizing the convexity of the problem, where T is the number of rounds, V_T is the function variation, and D_T is the gradient variation. In this paper, we derive improved dynamic regret bounds for OFW by extending the fast convergence rates of FW from offline to online optimization. The key technique for this extension is to set the step size of OFW via a line search rule. In this way, we first show that the dynamic regret bound of OFW improves to O(√(T(1+V_T))) for smooth functions. Second, we obtain a better bound of O((1+V_T)^(2/3) T^(1/3)) when the functions are smooth and strongly convex and the constraint set is strongly convex. Finally, for smooth and strongly convex functions whose minimizers lie in the interior of the constraint set, we show that the dynamic regret of OFW reduces to O(1+V_T), and can be further strengthened to O(min{P_T^∗, S_T^∗, V_T}+1) by performing a constant number of FW iterations per round, where P_T^∗ and S_T^∗ denote the path length and squared path length of the minimizers, respectively.
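
The key ingredient highlighted in the abstract is running OFW with a line-search step size rather than a fixed decaying one. The sketch below is a minimal illustration of that idea, not the paper's exact procedure: it assumes an ℓ2-ball constraint set, quadratic toy losses, and a simple ternary-search line search, all of which are illustrative choices not taken from the paper.

```python
import numpy as np

def lmo_l2_ball(grad, radius=1.0):
    # Linear minimization oracle for the l2 ball:
    # argmin_{||v|| <= radius} <grad, v> = -radius * grad / ||grad||.
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return np.zeros_like(grad)
    return -radius * grad / norm

def line_search(f, x, d, iters=30):
    # Ternary search for argmin_{sigma in [0,1]} f(x + sigma * d);
    # valid here because f is convex (hence unimodal) along the segment.
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(x + m1 * d) <= f(x + m2 * d):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

def ofw_line_search(losses, grads, dim, radius=1.0):
    # losses[t] and grads[t] are the loss and gradient oracles revealed in round t.
    x = np.zeros(dim)  # initial point inside the constraint set
    plays = []
    for f_t, g_t in zip(losses, grads):
        plays.append(x.copy())              # play x_t, then observe f_t
        v = lmo_l2_ball(g_t(x), radius)     # linear minimization step
        d = v - x
        sigma = line_search(f_t, x, d)      # line-search step size
        x = x + sigma * d                   # Frank-Wolfe update
    return plays

if __name__ == "__main__":
    # Toy non-stationary stream: shifting quadratics f_t(x) = 0.5 * ||x - c_t||^2.
    rng = np.random.default_rng(0)
    centers = [0.1 * t * rng.normal(size=5) for t in range(1, 21)]
    losses = [lambda x, c=c: 0.5 * np.sum((x - c) ** 2) for c in centers]
    grads = [lambda x, c=c: x - c for c in centers]
    plays = ofw_line_search(losses, grads, dim=5)
    print("final iterate:", np.round(plays[-1], 3))
```

The projection-free appeal of OFW is visible here: each round needs only one linear minimization over the constraint set (cheap for balls, polytopes, nuclear-norm balls, etc.) plus a one-dimensional line search, instead of a projection.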
