A stochastic linearized proximal method of multipliers for convex stochastic optimization with expectation constraints

06/22/2021
by   Liwei Zhang, et al.

This paper considers the problem of minimizing a convex expectation function subject to a set of convex expectation inequality constraints. We present a computable stochastic approximation type algorithm, namely the stochastic linearized proximal method of multipliers, to solve this convex stochastic optimization problem. The algorithm can be roughly viewed as a hybrid of stochastic approximation and the traditional proximal method of multipliers. Under mild conditions, we show that it achieves O(K^{-1/2}) expected convergence rates for both the objective reduction and the constraint violation when the algorithm parameters are properly chosen, where K denotes the number of iterations. Moreover, we show that, with high probability, the algorithm has an O(log(K) K^{-1/2}) constraint violation bound and an O(log^{3/2}(K) K^{-1/2}) objective bound. Preliminary numerical results demonstrate the performance of the proposed algorithm.
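For intuition, here is a minimal sketch of a primal-dual update in the spirit the abstract describes: the primal iterate takes a stochastic gradient step on a linearized augmented Lagrangian (standing in for the paper's proximal subproblem), and the multiplier takes a projected stochastic ascent step. The function name slpmm, the O(1/sqrt(k)) step-size schedules, the penalty parameter rho, and the toy problem are all illustrative assumptions; this is not the authors' exact scheme or analysis.

```python
import numpy as np

def slpmm(grad_f, g, grad_g, x0, steps=5000, alpha0=0.5, beta0=0.5, rho=1.0, seed=0):
    """Hypothetical sketch of a stochastic-approximation / method-of-multipliers hybrid
    for min E[F(x, xi)] subject to a single scalar constraint E[G(x, xi)] <= 0.

    grad_f(x, xi): stochastic gradient of the objective.
    g(x, xi):      stochastic estimate of the constraint value.
    grad_g(x, xi): stochastic gradient of the constraint.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    lam = 0.0  # multiplier, kept nonnegative
    for k in range(1, steps + 1):
        alpha = alpha0 / np.sqrt(k)  # diminishing steps, consistent with an O(K^{-1/2}) rate regime
        xi = rng.standard_normal(x.shape)
        # "Shifted" multiplier from the augmented-Lagrangian term max(lam + rho * g, 0)
        shift = max(lam + rho * g(x, xi), 0.0)
        # Primal step: stochastic gradient of the linearized augmented Lagrangian
        x = x - alpha * (grad_f(x, xi) + shift * grad_g(x, xi))
        # Dual step: stochastic ascent on the constraint, projected onto lam >= 0
        lam = max(lam + (beta0 / np.sqrt(k)) * g(x, xi), 0.0)
    return x, lam

if __name__ == "__main__":
    # Toy problem (an assumption for illustration): min E||x - xi||^2 with xi ~ N(c, 0.01 I),
    # subject to E[sum(x) - 1] <= 0; the solution is x = (0.5, 0.5) with multiplier 1.
    c = np.array([1.0, 1.0])
    grad_f = lambda x, xi: 2.0 * (x - c - 0.1 * xi)     # noisy gradient of the objective
    g      = lambda x, xi: x.sum() - 1.0 + 0.1 * xi[0]  # noisy constraint sample
    grad_g = lambda x, xi: np.ones_like(x)
    x_hat, lam_hat = slpmm(grad_f, g, grad_g, x0=np.zeros(2))
    print(x_hat, lam_hat)  # expect roughly (0.5, 0.5) and a multiplier near 1
```

In this sketch the per-iteration cost is one sample and two gradient evaluations; the paper's actual update, step sizes, and conditions should be taken from the full text.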
