Asymptotic Convergence Rate and Statistical Inference for Stochastic Sequential Quadratic Programming

05/27/2022
by Sen Na, et al.

We apply a stochastic sequential quadratic programming (StoSQP) algorithm to solve constrained nonlinear optimization problems where the objective is stochastic and the constraints are deterministic. We study a fully stochastic setup, in which only a single sample is available in each iteration for estimating the gradient and Hessian of the objective. We allow StoSQP to adaptively select a random stepsize α̅_t satisfying β_t ≤ α̅_t ≤ β_t + χ_t, where β_t and χ_t = o(β_t) are prespecified deterministic sequences. We also allow StoSQP to solve the Newton system inexactly via randomized iterative solvers, e.g., the sketch-and-project method, and we do not require the approximation error of the inexact Newton direction to vanish. For this general StoSQP framework, we establish the asymptotic convergence rate of its last iterate, with the worst-case iteration complexity as a byproduct, and we perform statistical inference. In particular, with properly decaying β_t, χ_t, we show that: (i) the StoSQP scheme takes at most O(1/ϵ^4) iterations to achieve ϵ-stationarity; (ii) asymptotically and almost surely, ‖(x_t - x^⋆, λ_t - λ^⋆)‖ = O(√(β_t log(1/β_t))) + O(χ_t/β_t), where (x_t, λ_t) is the primal-dual StoSQP iterate; (iii) the rescaled sequence 1/√(β_t) · (x_t - x^⋆, λ_t - λ^⋆) converges in distribution to a mean-zero Gaussian with a nontrivial covariance matrix. Moreover, we establish a Berry-Esseen bound for (x_t, λ_t) to quantify the convergence of its distribution function. We also provide a practical estimator for the covariance matrix, from which confidence intervals for (x^⋆, λ^⋆) can be constructed using the iterates {(x_t, λ_t)}_t. Our theorems are validated on nonlinear problems from the CUTEst test set.
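
To make the scheme concrete, here is a minimal sketch of the fully stochastic SQP loop described above, for an equality-constrained problem min_x E[f(x; ξ)] s.t. c(x) = 0. The oracle names (f_grad_sample, f_hess_sample, c, c_jac) and the schedules β_t = t^{-0.6}, χ_t = t^{-1.2} are illustrative assumptions, not the paper's exact specification; the KKT system is solved exactly here, whereas the paper also permits inexact randomized solvers such as sketch-and-project.

```python
import numpy as np

rng = np.random.default_rng(0)

def stosqp(x, lam, f_grad_sample, f_hess_sample, c, c_jac, T=10_000):
    """Hypothetical fully stochastic SQP loop (illustrative sketch only)."""
    for t in range(1, T + 1):
        g = f_grad_sample(x)   # single-sample gradient estimate
        H = f_hess_sample(x)   # single-sample Hessian estimate
        J = c_jac(x)           # Jacobian of the deterministic constraints
        m = J.shape[0]
        # Newton (KKT) system for the primal-dual direction (dx, dlam).
        K = np.block([[H, J.T], [J, np.zeros((m, m))]])
        rhs = -np.concatenate([g + J.T @ lam, c(x)])
        d = np.linalg.solve(K, rhs)
        # Prespecified deterministic sequences with chi_t = o(beta_t), then a
        # random adaptive stepsize drawn from [beta_t, beta_t + chi_t].
        beta_t, chi_t = t ** -0.6, t ** -1.2
        alpha = rng.uniform(beta_t, beta_t + chi_t)
        x = x + alpha * d[: x.size]
        lam = lam + alpha * d[x.size:]
    return x, lam
```

Result (iii) then suggests a plug-in recipe for inference: if 1/√(β_t) · (x_t - x^⋆, λ_t - λ^⋆) is approximately mean-zero Gaussian with covariance Ξ, coordinatewise confidence intervals follow from normal quantiles. The estimate Xi_hat below is assumed to be supplied externally (e.g., by the covariance estimator the paper proposes); this sketch only shows the interval construction.

```python
import numpy as np
from scipy.stats import norm

def confidence_intervals(z_t, Xi_hat, beta_t, level=0.95):
    # (z_t - z^*) / sqrt(beta_t) is asymptotically N(0, Xi), so a two-sided
    # interval for coordinate j is z_t[j] +/- q * sqrt(beta_t * Xi_hat[j, j]).
    q = norm.ppf(0.5 + level / 2)
    half = q * np.sqrt(beta_t * np.diag(Xi_hat))
    return z_t - half, z_t + half
```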

