Risk estimation for matrix recovery with spectral regularization

05/07/2012
by Charles-Alban Deledalle, et al.

In this paper, we develop an approach to recursively estimate the quadratic risk for matrix recovery problems regularized with spectral functions. Toward this end, in the spirit of the SURE theory, a key step is to compute the (weak) derivative and divergence of a solution with respect to the observations. Since such a solution is not available in closed form, but only through a proximal splitting algorithm, we propose to compute the divergence recursively from the sequence of iterates. A second challenge we address is the computation of the (weak) derivative of the proximity operator of a spectral function. To demonstrate the applicability of our approach, we exemplify it on a matrix completion problem, where it objectively and automatically selects the regularization parameter.
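The paper's scheme propagates the divergence recursively along the iterates of a proximal splitting algorithm. As a minimal illustrative sketch (not the authors' method), the snippet below works in the simpler denoising setting where the solution has a closed form: the proximity operator of the nuclear norm (a spectral function) is singular-value soft-thresholding, and the divergence needed by SURE is approximated by a Monte Carlo finite difference rather than the paper's recursive computation. The helper names `svt`, `mc_divergence`, and `sure` are hypothetical.

```python
import numpy as np

def svt(Y, lam):
    """Singular-value soft-thresholding: prox of lam * (nuclear norm).

    This is the proximity operator of a spectral function: it acts
    only on the singular values of Y.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

def mc_divergence(Y, lam, eps=1e-4, rng=None):
    """Monte Carlo finite-difference estimate of div_Y svt(Y, lam).

    Probes the (weak) derivative in one random direction D:
    <D, (svt(Y + eps*D) - svt(Y)) / eps> is an unbiased estimate
    of the divergence (up to the finite-difference error).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    D = rng.standard_normal(Y.shape)
    return np.sum(D * (svt(Y + eps * D, lam) - svt(Y, lam))) / eps

def sure(Y, lam, sigma):
    """Stein's Unbiased Risk Estimate for Y = X + Gaussian noise."""
    X_hat = svt(Y, lam)
    n = Y.size
    return (np.sum((X_hat - Y) ** 2) - n * sigma**2
            + 2 * sigma**2 * mc_divergence(Y, lam))

# Toy example: pick the regularization parameter minimizing SURE,
# without ever seeing the ground-truth low-rank matrix X.
rng = np.random.default_rng(1)
X = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))  # rank 2
sigma = 0.5
Y = X + sigma * rng.standard_normal(X.shape)
lams = np.linspace(0.5, 10.0, 20)
best = min(lams, key=lambda l: sure(Y, l, sigma))
```

In the matrix completion setting treated in the paper, no closed-form estimator exists, which is why the divergence must instead be tracked through the iterates of the splitting algorithm.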


