Stochastic Difference-of-Convex Algorithms for Solving Nonconvex Optimization Problems

11/11/2019
by Le Thi Hoai An, et al.

The paper deals with stochastic difference-of-convex programs, that is, optimization problems whose cost function is the sum of a lower semicontinuous difference-of-convex function and the expectation of a stochastic difference-of-convex function with respect to a probability distribution. This class of nonsmooth, nonconvex stochastic optimization problems plays a central role in many practical applications. While the literature contains many contributions dealing with convex and/or smooth stochastic optimization problems, there are still few algorithms for nonconvex and nonsmooth programs. In the deterministic optimization literature, the Difference-of-Convex functions Algorithm (DCA) is recognized as one of the few algorithms that effectively solve nonconvex and nonsmooth optimization problems. The main purpose of this paper is to present new stochastic variants of DCA for solving stochastic difference-of-convex programs. The convergence of the proposed algorithms is carefully analyzed.
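To make the DCA scheme the abstract refers to concrete, here is a minimal sketch on a toy DC program of my own choosing (not taken from the paper): minimize f(x) = x⁴/4 − x²/2, decomposed as g(x) − h(x) with g(x) = x⁴/4 and h(x) = x²/2, both convex. Each DCA iteration linearizes h at the current point and solves the resulting convex subproblem, which here has a closed form.

```python
import numpy as np

def dca(x0, iters=30):
    """Toy DCA for f(x) = x**4/4 - x**2/2 = g(x) - h(x),
    with g(x) = x**4/4 and h(x) = x**2/2 (an assumed example)."""
    x = x0
    for _ in range(iters):
        y = x           # y_k in the subgradient of h at x_k: h'(x) = x
        x = np.cbrt(y)  # x_{k+1} = argmin_x g(x) - y*x, i.e. x**3 = y
    return x

print(dca(2.0))   # iterates approach the local minimizer x = 1
print(dca(-2.0))  # by symmetry, approaches x = -1
```

The stochastic variants studied in the paper replace exact quantities with sample-based estimates (e.g. of the expectation term); this deterministic sketch only illustrates the basic linearize-then-solve structure of DCA.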
