Bayesian Regularization: From Tikhonov to Horseshoe

02/17/2019
by Nicholas G. Polson, et al.

Bayesian regularization is a central tool in modern-day statistical and machine learning methods. Many applications involve high-dimensional sparse signal recovery problems. The goal of our paper is to provide a review of the literature on penalty-based regularization approaches, from Tikhonov (Ridge, Lasso) to horseshoe regularization.
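To illustrate the penalty–prior correspondence the paper reviews, here is a minimal sketch contrasting Tikhonov/ridge regularization (the MAP estimate under a Gaussian prior, available in closed form) with the lasso (the MAP estimate under a Laplace prior, fitted here by coordinate descent with soft-thresholding). The synthetic data and penalty weights are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sparse-recovery problem: 50 coefficients, only 5 nonzero.
n, p = 100, 50
beta_true = np.zeros(p)
beta_true[:5] = [3.0, -2.0, 1.5, 4.0, -1.0]
X = rng.standard_normal((n, p))
y = X @ beta_true + 0.5 * rng.standard_normal(n)

# Ridge (Tikhonov penalty, Gaussian prior): closed-form MAP estimate
# beta = (X'X + lam I)^{-1} X'y. Shrinks but never zeroes coefficients.
lam_ridge = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam_ridge * np.eye(p), X.T @ y)

# Lasso (L1 penalty, Laplace prior): cyclic coordinate descent.
# Each update soft-thresholds, which sets small coefficients exactly to zero.
def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

lam_lasso = 20.0  # illustrative penalty weight
beta_lasso = np.zeros(p)
col_norm2 = (X ** 2).sum(axis=0)
for _ in range(200):
    for j in range(p):
        # Partial residual excluding coordinate j.
        r = y - X @ beta_lasso + X[:, j] * beta_lasso[j]
        beta_lasso[j] = soft_threshold(X[:, j] @ r, lam_lasso) / col_norm2[j]

print("nonzero ridge coefficients:", int(np.sum(np.abs(beta_ridge) > 1e-8)))
print("nonzero lasso coefficients:", int(np.sum(np.abs(beta_lasso) > 1e-8)))
```

The contrast mirrors the review's theme: the Gaussian prior shrinks every coefficient a little, so all 50 ridge estimates are nonzero, while the Laplace prior's pointy density at zero makes the lasso recover a genuinely sparse solution. The horseshoe prior goes further by combining a global shrinkage scale with heavy-tailed local scales, shrinking noise aggressively while leaving large signals nearly untouched.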


Related research

- Sparse Regularization in Marketing and Economics (09/01/2017): Sparse alpha-norm regularization has many data-rich applications in mark...
- Regularization and Bayesian Learning in Dynamical Systems: Past, Present and Future (11/04/2015): Regularization and Bayesian methods for system identification have been ...
- Takeuchi's Information Criteria as a form of Regularization (03/13/2018): Takeuchi's Information Criteria (TIC) is a linearization of maximum like...
- Support Localization and the Fisher Metric for off-the-grid Sparse Regularization (10/08/2018): Sparse regularization is a central technique for both machine learning (...
- Interpreting a Penalty as the Influence of a Bayesian Prior (02/01/2020): In machine learning, it is common to optimize the parameters of a probab...
- Non-convex regularization in remote sensing (06/23/2016): In this paper, we study the effect of different regularizers and their i...
- Horseshoe Regularization for Machine Learning in Complex and Deep Models (04/24/2019): Since the advent of the horseshoe priors for regularization, global-loca...
