Nonlinear Least Squares for Large-Scale Machine Learning using Stochastic Jacobian Estimates

07/12/2021
by Johannes J. Brust, et al.

For large nonlinear least squares loss functions in machine learning, we exploit the property that the number of model parameters typically exceeds the number of data points in one batch. This implies a low-rank structure in the Hessian of the loss, which enables effective means of computing search directions. Using this property, we develop two algorithms that estimate Jacobian matrices and perform well when compared to state-of-the-art methods.
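To make the low-rank structure concrete, here is a minimal NumPy sketch, not the paper's algorithm: for a batch of m residuals r and an m-by-n Jacobian J with m far smaller than the parameter count n, the Gauss-Newton matrix J^T J has rank at most m, so a damped step d = -(J^T J + lam*I_n)^{-1} J^T r can be obtained from a small m-by-m solve via the push-through identity (J^T J + lam*I_n)^{-1} J^T = J^T (J J^T + lam*I_m)^{-1}. The names J, r, and the damping parameter lam below are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, lam = 32, 2000, 1e-3            # batch size m << number of parameters n
J = rng.standard_normal((m, n))       # stand-in for a (stochastic) batch Jacobian
r = rng.standard_normal(m)            # stand-in for the batch residuals

# Cheap step: one m-by-m solve instead of an n-by-n solve
d = -J.T @ np.linalg.solve(J @ J.T + lam * np.eye(m), r)

# Check against the expensive n-by-n formulation of the same damped step
d_full = -np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ r)
print(np.allclose(d, d_full, atol=1e-6))   # True: both formulations agree
```

The point of the sketch is only that the dominant cost scales with the batch size m rather than with n; the paper's algorithms build on this kind of structure with stochastic Jacobian estimates.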
