Improved Iteration Complexities for Overconstrained p-Norm Regression
In this paper we obtain improved iteration complexities for solving ℓ_p regression. We provide methods which, given any full-rank 𝐀 ∈ ℝ^{n × d} with n ≥ d, b ∈ ℝ^n, and p ≥ 2, solve min_{x ∈ ℝ^d} ‖𝐀x − b‖_p to high precision in time dominated by that of solving O_p(d^{(p−2)/(3p−2)}) linear systems in 𝐀^⊤𝐃𝐀 for positive diagonal matrices 𝐃. This improves upon the previous best iteration complexity of O_p(n^{(p−2)/(3p−2)}) (Adil, Kyng, Peng, Sachdeva 2019). As a corollary, we obtain an O(d^{1/3}ϵ^{−2/3}) iteration complexity for approximate ℓ_∞ regression. Further, for q ∈ (1, 2] with dual exponent p = q/(q−1), we provide an algorithm that solves ℓ_q regression in O(d^{(p−2)/(2p−2)}) iterations. To obtain these results we analyze row reweightings (closely inspired by ℓ_p-norm Lewis weights) which allow a closer connection between ℓ_2 and ℓ_p regression. We provide adaptations of two different iterative optimization frameworks which leverage this connection and yield our results. The first framework is based on iterative refinement and multiplicative-weights-based width reduction, and the second is based on highly smooth acceleration. Both approaches yield O_p(d^{(p−2)/(3p−2)})-iteration methods, but the second has a polynomial dependence on p (as opposed to the exponential dependence of the first algorithm) and provides a new alternative to the previous state-of-the-art methods for ℓ_p regression for large p.
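Since the approach hinges on two primitives named above — row reweightings in the spirit of ℓ_p-norm Lewis weights, and solving linear systems in 𝐀^⊤𝐃𝐀 for positive diagonal 𝐃 — the sketch below illustrates both at once. It implements the classical Cohen–Peng fixed-point iteration for ℓ_p Lewis weights, which is *not* the paper's algorithm (the width-reduction and acceleration machinery is only summarized above); the function name `lewis_weights`, the iteration count, and the use of NumPy are illustrative assumptions, and the plain fixed point is only known to converge for p < 4.

```python
import numpy as np

def lewis_weights(A, p, iters=50):
    """Sketch: approximate the l_p Lewis weights of A's rows.

    Runs the fixed-point iteration (Cohen & Peng 2015)
        w_i <- (a_i^T (A^T W^{1 - 2/p} A)^{-1} a_i)^{p/2},
    whose fixed point defines the l_p Lewis weights. Each step solves
    a linear system in A^T D A for a positive diagonal D -- the same
    primitive whose count the abstract bounds. The plain iteration is
    only known to converge for p < 4; it is illustrative only.
    """
    n, d = A.shape
    w = np.ones(n)
    for _ in range(iters):
        D = w ** (1.0 - 2.0 / p)      # positive diagonal reweighting
        M = A.T @ (D[:, None] * A)    # M = A^T D A
        # tau_i = a_i^T M^{-1} a_i, a reweighted leverage score.
        tau = np.einsum('ij,ij->i', A, np.linalg.solve(M, A.T).T)
        w = tau ** (p / 2.0)
    return w

# Toy usage: exact l_p Lewis weights sum to d, a handy sanity check.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 10))
w = lewis_weights(A, p=3.0)
print(w.sum())  # ~ 10.0 = d after convergence
```

The sanity check follows because at the fixed point w_i equals the leverage score of row i of 𝐖^{1/2−1/p}𝐀, and leverage scores always sum to d.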