Extensions to the Proximal Distance Method of Constrained Optimization

09/02/2020
by Alfonso Landeros, et al.

The current paper studies the problem of minimizing a loss f(x) subject to constraints of the form Dx ∈ S, where S is a closed set, convex or not, and D is a fusion matrix. Fusion constraints can capture smoothness, sparsity, or more general constraint patterns. To tackle this generic class of problems, we combine the Beltrami-Courant penalty method of optimization with the proximal distance principle. The latter is driven by minimization of penalized objectives f(x) + ρ/2 dist(Dx, S)^2 involving large tuning constants ρ and the squared Euclidean distance of Dx from S. The next iterate x_{n+1} of the corresponding proximal distance algorithm is constructed from the current iterate x_n by minimizing the majorizing surrogate function f(x) + ρ/2 ‖Dx − 𝒫_S(Dx_n)‖^2. For fixed ρ and convex f(x) and S, we prove convergence, provide convergence rates, and demonstrate linear convergence under stronger assumptions. We also construct a steepest descent (SD) variant to avoid costly linear system solves. To benchmark our algorithms, we adapt the alternating direction method of multipliers (ADMM) and compare the two on extensive numerical tests, including problems in metric projection, convex regression, convex clustering, total variation image denoising, and projection of a matrix to one with a good condition number. Our experiments demonstrate the superior speed and acceptable accuracy of the SD variant on high-dimensional problems. Julia code to replicate all of our experiments can be found at https://github.com/alanderos91/ProximalDistanceAlgorithms.jl.
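To make the surrogate update concrete, below is a minimal Julia sketch of the fixed-ρ proximal distance iteration. It is not taken from the authors' package; it covers only the hypothetical special case f(x) = 1/2 ‖Ax − b‖^2 with S equal to the nonnegative orthant, so that 𝒫_S reduces to an elementwise maximum with zero. The function name proximal_distance_ls and all keyword parameters are illustrative assumptions. Each iteration projects Dx_n onto S and then minimizes the quadratic surrogate by solving a linear system whose matrix stays fixed while ρ is fixed.

using LinearAlgebra

# Minimal sketch (assumed names and defaults) of one fixed-rho proximal distance solve
# for f(x) = 0.5*norm(A*x - b)^2 subject to D*x in S, with S the nonnegative orthant.
function proximal_distance_ls(A, b, D; rho = 1.0, maxiter = 500, tol = 1e-8)
    x = A \ b                                   # unconstrained least squares start
    H = Symmetric(A' * A + rho * (D' * D))      # surrogate Hessian; constant while rho is fixed
    F = cholesky(H)                             # factor once (assumes H is positive definite)
    for _ in 1:maxiter
        p = max.(D * x, 0.0)                    # P_S(D*x_n): project onto the nonnegative orthant
        xnew = F \ (A' * b + rho * (D' * p))    # minimize f(x) + rho/2*norm(D*x - p)^2
        if norm(xnew - x) <= tol * (1 + norm(x))
            return xnew                         # iterates have stabilized
        end
        x = xnew
    end
    return x
end

For example, calling proximal_distance_ls(A, b, Matrix(1.0I, size(A, 2), size(A, 2)); rho = 100.0) with a tall matrix A sketches a nonnegative least squares solve. As is typical of penalty methods, ρ would in practice be increased across a sequence of such subproblems, and the paper's SD variant avoids the exact linear solve by taking steepest descent steps on the surrogate instead.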

