Frank-Wolfe algorithm for DC optimization problem

08/31/2023
by R. Díaz Millán, et al.

In the present paper, we formulate two versions of the Frank–Wolfe algorithm, also known as the conditional gradient method, with an adaptive step size for solving the DC optimization problem. The DC objective function consists of two components: the first is assumed to be differentiable with a Lipschitz continuous gradient, while the second is only assumed to be convex. The second version builds on the first and employs finite differences to approximate the gradient of the first component of the objective function. In contrast to previous formulations, which relied on the curvature/Lipschitz-type constant of the objective function, the computed step size does not require any constant associated with the components. For the first version, we establish that the algorithm is well-defined and that every limit point of the generated sequence is a stationary point of the problem. We also introduce the class of weak-star-convex functions and show that, although these functions are non-convex in general, the first version of the algorithm minimizes them with a convergence rate of O(1/k). In the second version of the Frank–Wolfe algorithm, the finite difference used to approximate the gradient is computed with a step size that is adaptively updated from the two previous iterations. Unlike previous applications of finite differences in the Frank–Wolfe algorithm, which yield approximate gradients with absolute error, the approximation used here carries a relative error, simplifying the analysis of the algorithm. In this case, we show that every limit point of the sequence generated by the second version of the algorithm is a stationary point of the problem under consideration, and we establish a convergence rate of O(1/√k) for the duality gap.
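To make the structure of such a method concrete, the following is a minimal illustrative sketch (not the paper's actual algorithm) of a Frank–Wolfe iteration for a DC objective f = f1 − f2 over the box [−1, 1]^n. The problem data (Q, b, lam), the linear minimization oracle lmo_box, and the Armijo-type backtracking rule are all assumptions made for illustration; in particular, the backtracking rule is only a stand-in for the paper's adaptive step size, which avoids any Lipschitz-type constant.

```python
import numpy as np

# Hypothetical DC problem: f1(x) = 0.5 x'Qx - b'x (smooth, Lipschitz gradient),
# f2(x) = lam * ||x||_1 (convex, nonsmooth), minimized over the box [-1, 1]^n.
rng = np.random.default_rng(0)
n = 20
A = rng.standard_normal((n, n))
Q = A.T @ A / n                              # positive semidefinite
b = rng.standard_normal(n)
lam = 0.1

f1 = lambda x: 0.5 * x @ Q @ x - b @ x
grad_f1 = lambda x: Q @ x - b
f2 = lambda x: lam * np.abs(x).sum()
subgrad_f2 = lambda x: lam * np.sign(x)      # one element of the subdifferential
f = lambda x: f1(x) - f2(x)

def lmo_box(g):
    """Linear minimization oracle over the box [-1, 1]^n."""
    return -np.sign(g)

x = np.zeros(n)
for k in range(200):
    g = grad_f1(x) - subgrad_f2(x)           # linearize both DC components
    p = lmo_box(g)
    d = p - x
    gap = -g @ d                             # Frank-Wolfe duality-gap surrogate
    if gap <= 1e-8:                          # approximate stationarity reached
        break
    # Illustrative Armijo-type backtracking: needs no Lipschitz constant,
    # but differs from the adaptive rule analyzed in the paper.
    t, beta, sigma = 1.0, 0.5, 1e-4
    while f(x + t * d) > f(x) - sigma * t * gap and t > 1e-12:
        t *= beta
    x = x + t * d

print("final objective:", f(x))
```

The second version described in the abstract would additionally replace grad_f1 by a finite-difference approximation whose increment is updated from the two previous iterates, giving a gradient estimate with relative rather than absolute error.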


