Online Frank-Wolfe with Unknown Delays

04/11/2022
by   Yuanyu Wan, et al.

The online Frank-Wolfe (OFW) method has gained much popularity in online convex optimization due to its projection-free property. Previous studies showed that for convex losses, OFW attains O(T^3/4) regret over general convex sets and O(T^2/3) regret over strongly convex sets, and that if the losses are strongly convex, these bounds improve to O(T^2/3) and O(√T), respectively. However, these analyses assume that each gradient queried by OFW is revealed immediately, which may not hold in practice. In this paper, we consider a more practical setting in which gradients arrive with arbitrary and unknown delays, and we propose delayed OFW, which generalizes OFW to this setting. The main idea is to perform an OFW-style update whenever a gradient is received, and to play the latest decision in each round. We first show that for convex losses, delayed OFW achieves O(T^3/4 + dT^1/4) regret over general convex sets and O(T^2/3 + dT^1/3) regret over strongly convex sets, where d is the maximum delay. Furthermore, we prove that for strongly convex losses, delayed OFW attains O(T^2/3 + d log T) regret over general convex sets and O(√T + d log T) regret over strongly convex sets. Compared with the regret bounds in the non-delayed setting, these results imply that delayed OFW is robust to a relatively large amount of delay; for example, the O(T^3/4 + dT^1/4) bound matches the non-delayed O(T^3/4) rate whenever d = O(√T).
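To make the update rule concrete, here is a minimal Python sketch of the idea described above: gradients are queried at the current decision, may arrive after an arbitrary and unknown delay, and each arrival triggers one Frank-Wolfe-style step on a linearized surrogate, while the latest decision is played in every round. The linear minimization oracle over the unit L2 ball (lmo_l2_ball), the surrogate function, and the step sizes are illustrative assumptions for exposition, not the exact choices from the paper.

```python
import numpy as np

def lmo_l2_ball(grad, radius=1.0):
    """Linear minimization oracle over the L2 ball: argmin_{||v|| <= radius} <grad, v>."""
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return np.zeros_like(grad)
    return -radius * grad / norm

def delayed_ofw(loss_grad, delays, dim, T):
    """Sketch of delayed OFW with convex losses over the unit L2 ball.

    loss_grad(t, x): gradient of the round-t loss at x.
    delays[t]:       number of rounds after which the round-t gradient arrives.
    Returns the list of decisions played in rounds 0, ..., T-1.
    """
    eta = T ** (-0.75)            # step-size scale suggested by the O(T^{3/4}) analysis
    x = np.zeros(dim)             # current (latest) decision, with x_1 = 0
    grad_sum = np.zeros(dim)      # sum of all gradients received so far
    pending = {}                  # arrival round -> list of (query round, query point)
    played = []
    updates = 0                   # number of gradients processed so far

    for t in range(T):
        played.append(x.copy())   # play the latest decision for round t

        # Query the round-t gradient at the current decision; it is revealed later.
        pending.setdefault(t + delays[t], []).append((t, x.copy()))

        # Perform one OFW-style update per gradient arriving at the end of round t.
        for (s, x_s) in pending.pop(t, []):
            grad_sum += loss_grad(s, x_s)
            # Surrogate F(x) = eta * <grad_sum, x> + ||x - x_1||^2, so grad F = eta*grad_sum + 2x.
            v = lmo_l2_ball(eta * grad_sum + 2.0 * x)
            sigma = min(1.0, 2.0 / (updates + 2))   # simplified step size; the analysis uses a line search
            x = x + sigma * (v - x)
            updates += 1

    return played
```

As a simple exercise of the convex-loss case, one could run this sketch with quadratic losses of the form f_t(x) = ||x - z_t||^2 and randomly generated delays bounded by some d.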
