No quantum speedup over gradient descent for non-smooth convex optimization

10/05/2020
by Ankit Garg, et al.

We study the first-order convex optimization problem, where we have black-box access to a (not necessarily smooth) function f: ℝ^n → ℝ and its (sub)gradient. Our goal is to find an ϵ-approximate minimum of f starting from a point at distance at most R from the true minimum. If f is G-Lipschitz, then the classic gradient descent algorithm solves this problem with O((GR/ϵ)^2) queries. Importantly, the number of queries is independent of the dimension n, and gradient descent is optimal in this regard: no deterministic or randomized algorithm can achieve a better query complexity that is still independent of the dimension n. In this paper we reprove the randomized lower bound of Ω((GR/ϵ)^2) using a simpler argument than previous lower bounds. We then show that although the function family used in that lower bound is hard for randomized algorithms, it can be solved using only O(GR/ϵ) quantum queries. We therefore prove an improved lower bound against quantum algorithms using a different set of instances, establishing our main result that in general even quantum algorithms need Ω((GR/ϵ)^2) queries to solve the problem. Hence there is no quantum speedup over gradient descent for black-box first-order convex optimization without further assumptions on the function family.
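To make the classical baseline concrete, below is a minimal sketch of the subgradient method attaining the O((GR/ϵ)^2) query bound quoted above. The function name, its arguments, and the toy objective are hypothetical illustrations, not code from the paper; the step size and iteration count follow the textbook analysis, which guarantees f(x̄) − f(x*) ≤ GR/√T for the averaged iterate x̄, so T = ⌈(GR/ϵ)^2⌉ subgradient queries suffice, independent of n.

```python
import numpy as np

def subgradient_descent(subgrad, x0, G, R, eps):
    """Sketch of the classical subgradient method (hypothetical helper).

    subgrad : oracle returning a subgradient of f at x (one query per call)
    x0      : starting point, assumed within distance R of a minimizer x*
    G, R, eps : Lipschitz constant, initial distance bound, target accuracy

    The standard analysis gives f(x_bar) - f(x*) <= G*R/sqrt(T), so
    T = ceil((G*R/eps)^2) queries suffice -- matching the dimension-independent
    O((GR/eps)^2) bound quoted in the abstract.
    """
    T = int(np.ceil((G * R / eps) ** 2))
    eta = R / (G * np.sqrt(T))        # fixed step size from the analysis
    x = np.asarray(x0, dtype=float).copy()
    running_sum = np.zeros_like(x)
    for _ in range(T):
        x = x - eta * subgrad(x)      # one (sub)gradient query per step
        running_sum += x
    return running_sum / T            # averaged iterate x_bar


# Toy usage (hypothetical): f(x) = ||x||_2 is 1-Lipschitz with minimizer 0.
if __name__ == "__main__":
    n = 5
    x0 = np.ones(n)                   # ||x0 - x*|| = sqrt(n)
    subgrad = lambda x: x / np.linalg.norm(x) if np.linalg.norm(x) > 0 else np.zeros_like(x)
    x_bar = subgradient_descent(subgrad, x0, G=1.0, R=np.sqrt(n), eps=0.1)
    print(np.linalg.norm(x_bar))      # expected to be about 0.1 or less
```

The paper's point is that this quadratic query count is already optimal: no randomized algorithm, and (the main result) no quantum algorithm, can improve on it in general for this black-box model.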
