Acceleration through Optimistic No-Regret Dynamics

07/27/2018
by Jun-Kun Wang, et al.

We consider the problem of minimizing a smooth convex function by reducing the optimization to computing the Nash equilibrium of a particular zero-sum convex-concave game. Zero-sum games can be solved using no-regret learning dynamics, and the standard approach leads to a rate of O(1/T). We show that the game can instead be solved at a rate of O(1/T^2), extending recent work [RS13, SALS15] that uses optimistic learning to speed up equilibrium computation. The optimization algorithm that we extract from this equilibrium reduction coincides exactly with the well-known Nesterov acceleration method [N83a], and the same story allows us to recover several variants of Nesterov's algorithm via small tweaks. This methodology unifies a number of different iterative optimization methods: we show that the HeavyBall algorithm is precisely the non-optimistic variant of Nesterov's method, and recent prior work already established a similar perspective on Frank-Wolfe [AW17, ALLW18].
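As a rough illustration of the reduction described above, the sketch below runs no-regret dynamics with an optimistic prediction on the Fenchel game g(x, y) = <x, y> - f*(y) for a small quadratic objective: the y-player best-responds to a weighted average of the x-player's iterates (using the previous x-iterate as its guess for the current round), which amounts to evaluating the gradient of f at that averaged point, while the x-player takes weighted online gradient steps. The function name fenchel_game_acceleration, the weights alpha_t = t, and the step size 1/(2L) are illustrative assumptions rather than the paper's exact constants, so this is a minimal sketch of the viewpoint, not a transcription of the authors' algorithm.

```python
import numpy as np

def fenchel_game_acceleration(grad, x0, L, T):
    """Optimistic no-regret dynamics on the Fenchel game g(x, y) = <x, y> - f*(y).

    y-player: optimistic follow-the-leader; its best response to the weighted
              average of the x-iterates (using the previous x-iterate as the
              guess for the current round) reduces to a gradient of f at that
              averaged point.
    x-player: weighted online gradient descent on the linear losses <x, y_t>.

    Returns the weighted average of the x-player's iterates.  With the weights
    and step size assumed below, the recursion has the familiar shape of a
    Nesterov-style accelerated update.
    """
    eta = 1.0 / (2.0 * L)                 # x-player step size (assumed choice)
    x = np.array(x0, dtype=float)         # x-player's current iterate
    x_avg = x.copy()                      # weighted average of the x-iterates
    A = 0.0                               # running sum of the weights alpha_t
    for t in range(1, T + 1):
        alpha = float(t)                  # weight alpha_t = t (assumed choice)
        A += alpha
        tau = alpha / A
        z = (1 - tau) * x_avg + tau * x   # optimistic prediction point
        y = grad(z)                       # y-player best response: y_t = grad f(z_t)
        x = x - eta * alpha * y           # x-player's weighted gradient step
        x_avg = (1 - tau) * x_avg + tau * x  # update the weighted average
    return x_avg

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 50
    M = rng.standard_normal((n, n))
    A_mat = M.T @ M + np.eye(n)           # positive-definite quadratic objective
    b = rng.standard_normal(n)
    L = np.linalg.eigvalsh(A_mat).max()   # smoothness constant of f
    f = lambda v: 0.5 * v @ A_mat @ v - b @ v
    grad = lambda v: A_mat @ v - b
    x_star = np.linalg.solve(A_mat, b)    # exact minimizer, for reference

    x_hat = fenchel_game_acceleration(grad, np.zeros(n), L, T=300)
    print("optimality gap:", f(x_hat) - f(x_star))
```

With these choices the loop collapses to a three-point accelerated recursion of the method-of-similar-triangles type, which is the sense in which the abstract says the algorithm extracted from the equilibrium reduction coincides with Nesterov's method; the paper's own derivation pins down the exact weights and step sizes.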
