Dictionary descent in optimization

11/04/2015
by Vladimir Temlyakov, et al.

The problem of convex optimization is studied. In convex optimization the minimization is typically carried out over a d-dimensional domain, and the convergence rate of an optimization algorithm very often depends on the dimension d. The algorithms studied in this paper utilize dictionaries instead of the canonical basis used in coordinate descent algorithms. We show how this approach allows us to reduce the dimensionality of the problem. We also investigate which properties of a dictionary are beneficial for the convergence rate of typical greedy-type algorithms.
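To illustrate the idea of descending along dictionary elements rather than canonical coordinates, here is a minimal sketch of greedy dictionary descent on a convex quadratic. The objective, the random dictionary, and the exact line search are illustrative assumptions, not the paper's specific algorithms; using only the canonical basis as the dictionary recovers plain greedy coordinate descent.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20

# Illustrative convex objective f(x) = 0.5 * ||A x - b||^2 (an assumption,
# chosen so the exact line search below has a closed form).
A = rng.standard_normal((30, d))
b = rng.standard_normal(30)

def f(x):
    r = A @ x - b
    return 0.5 * r @ r

def grad(x):
    return A.T @ (A @ x - b)

# Dictionary: the canonical basis augmented with normalized random
# directions. Restricting D to np.eye(d) gives coordinate descent.
D = np.concatenate([np.eye(d), rng.standard_normal((40, d))], axis=0)
D /= np.linalg.norm(D, axis=1, keepdims=True)

x = np.zeros(d)
for _ in range(200):
    g = grad(x)
    # Greedy selection: dictionary element most correlated with -grad f(x).
    scores = D @ g
    i = int(np.argmax(np.abs(scores)))
    if abs(scores[i]) < 1e-12:
        break  # gradient (numerically) zero: minimizer reached
    direction = -np.sign(scores[i]) * D[i]
    # Exact line search along the chosen direction, valid for quadratics:
    # t* = -<grad f(x), direction> / ||A direction||^2 >= 0.
    Ad = A @ direction
    t = -(g @ direction) / (Ad @ Ad)
    x = x + t * direction
```

Each step moves along a single dictionary element, so the iterates live in the span of the selected elements; a dictionary adapted to the objective can make that span, and hence the effective dimensionality, much smaller than d.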


research
12/10/2014

Convergence and rate of convergence of some greedy algorithms in convex optimization

The paper gives a systematic study of the approximate versions of three ...
research
01/01/2014

Convex optimization on Banach Spaces

Greedy algorithms which use only function evaluations are applied to con...
research
11/18/2022

Mirror Sinkhorn: Fast Online Optimization on Transport Polytopes

Optimal transport has arisen as an important tool in machine learning, a...
research
12/04/2013

Chebyshev Greedy Algorithm in convex optimization

Chebyshev Greedy Algorithm is a generalization of the well known Orthogo...
research
01/15/2020

Biorthogonal greedy algorithms in convex optimization

The study of greedy approximation in the context of convex optimization ...
research
05/15/2015

Algorithmic Connections Between Active Learning and Stochastic Convex Optimization

Interesting theoretical associations have been established by recent pap...
research
05/08/2023

Distributed Detection over Blockchain-aided Internet of Things in the Presence of Attacks

Distributed detection over a blockchain-aided Internet of Things (BIoT) ...
