Natural Gradient for Combined Loss Using Wavelets

06/29/2020
by Lexing Ying, et al.

Natural gradients have been widely used in the optimization of loss functionals over probability space, with important examples such as Fisher-Rao gradient descent for the Kullback-Leibler divergence, Wasserstein gradient descent for transport-related functionals, and Mahalanobis gradient descent for quadratic loss functionals. This note considers the situation in which the loss is a convex linear combination of these examples. We propose a new natural gradient algorithm that uses compactly supported wavelets to approximately diagonalize the Hessian of the combined loss. Numerical results are included to demonstrate the efficiency of the proposed algorithm.
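As a concrete illustration of the mechanism the abstract describes, here is a minimal sketch in Python (using NumPy and the PyWavelets package). It is not the paper's algorithm: the per-subband Hessian symbol h, the subband-to-frequency indexing, the step size, and the simplex projection are all illustrative assumptions, and the function name combined_natural_gradient_step is hypothetical. The sketch preconditions the Euclidean gradient by dividing each wavelet subband by a model diagonal of the combined Hessian, which is the cheap inversion that an (approximate) wavelet diagonalization enables.

import numpy as np
import pywt  # PyWavelets

def combined_natural_gradient_step(rho, grad, alpha, beta, gamma,
                                   wavelet="db4", level=4, lr=0.1):
    # Decompose the Euclidean gradient into wavelet subbands;
    # coeffs[0] is the coarsest approximation, coeffs[-1] the finest details.
    coeffs = pywt.wavedec(grad, wavelet, mode="periodization", level=level)
    scaled = []
    for k, c in enumerate(coeffs):
        # Model diagonal symbol for the combined Hessian (an assumption):
        #   alpha -> Fisher-Rao/KL part, treated as scale-independent,
        #   beta  -> transport-related part, order -2, decays at fine scales,
        #   gamma -> quadratic part, constant across scales.
        freq = 2.0 ** max(k, 1)            # crude per-subband frequency scale
        h = alpha + beta / freq**2 + gamma
        scaled.append(c / h)               # invert the diagonal Hessian model
    direction = pywt.waverec(scaled, wavelet, mode="periodization")
    rho_new = rho - lr * direction[: rho.size]
    # Keep the iterate a probability vector (clip and renormalize).
    rho_new = np.maximum(rho_new, 1e-12)
    return rho_new / rho_new.sum()

# Toy usage on a periodic 1-D grid (all numbers illustrative):
n = 256
x = np.linspace(0.0, 1.0, n, endpoint=False)
target = np.exp(-((x - 0.5) ** 2) / 0.01)
target /= target.sum()
rho = np.full(n, 1.0 / n)                  # uniform initialization
for _ in range(50):
    grad = np.log(rho / target) + 1.0      # Euclidean gradient of KL(rho || target)
    rho = combined_natural_gradient_step(rho, grad, alpha=1.0, beta=0.5, gamma=0.1)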


