Mirror descent in saddle-point problems: Going the extra (gradient) mile

07/07/2018
by Panayotis Mertikopoulos, et al.

Owing to their connection with generative adversarial networks (GANs), saddle-point problems have recently attracted considerable interest in machine learning and beyond. By necessity, most theoretical guarantees revolve around convex-concave problems; however, making theoretical inroads towards efficient GAN training crucially depends on moving beyond this classic framework. To make piecemeal progress along these lines, we analyze the widely used mirror descent (MD) method in a class of non-monotone problems, called coherent, whose solutions coincide with those of a naturally associated variational inequality. Our first result is that, under strict coherence (a condition satisfied by all strictly convex-concave problems), MD methods converge globally; without strict coherence, however, they may fail to converge even in simple bilinear models. To mitigate this deficiency, we add an "extra-gradient" step, which we show stabilizes MD methods by looking ahead and using a "future gradient". These theoretical results are subsequently validated by numerical experiments in GANs.
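To make the "extra-gradient" idea concrete, here is a minimal sketch (not the paper's exact algorithm) of the look-ahead update on a bilinear saddle-point problem, using the Euclidean mirror map so each prox step reduces to a plain gradient step; the matrix, step size, and iteration count are illustrative choices.

```python
import numpy as np

# Bilinear saddle-point problem  min_x max_y  f(x, y) = x^T A y,
# whose unique saddle point is (0, 0). Plain gradient descent-ascent
# cycles or diverges here; the extra-gradient step stabilizes it.
A = np.array([[1.0, 2.0],
              [-1.0, 1.0]])
x = np.array([1.0, -1.0])
y = np.array([0.5, 2.0])
step = 0.1  # illustrative step size

def grad(x, y):
    """Return (grad_x f, grad_y f) for the bilinear objective."""
    return A @ y, A.T @ x

for _ in range(500):
    # 1) Look-ahead step: take a provisional gradient step.
    gx, gy = grad(x, y)
    x_lead, y_lead = x - step * gx, y + step * gy
    # 2) Extra-gradient step: update the base iterate with the
    #    "future" gradient evaluated at the look-ahead point.
    gx_lead, gy_lead = grad(x_lead, y_lead)
    x, y = x - step * gx_lead, y + step * gy_lead

print("final iterate:", x, y)  # approaches the saddle point (0, 0)
```

With a non-Euclidean mirror map (e.g., entropic, for simplex-constrained problems), the two gradient steps above would be replaced by the corresponding prox-mapping updates.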

