On the ERM Principle with Networked Data

11/12/2017
by Yuanhong Wang, et al.

Networked data, in which every training example involves two objects and may share objects with other examples, arises in many machine learning tasks such as learning to rank and link prediction. A challenge of learning from networked examples is that target values are not known for some pairs of objects; in this case, neither the classical i.i.d. assumption nor techniques based on complete U-statistics apply. Most existing theoretical results for this problem deal only with the classical empirical risk minimization (ERM) principle, which weights every example equally, but this strategy leads to unsatisfactory bounds. We consider general weighted ERM and show new universal risk bounds for this problem. These bounds naturally define an optimization problem whose solution yields appropriate weights for networked examples. Although this optimization problem is not convex in general, we devise a new fully polynomial-time approximation scheme (FPTAS) to solve it.
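To make the weighted-ERM idea concrete, here is a minimal sketch in Python. It treats each networked example as an edge between two shared objects and assigns it the weight 1/max(deg(u), deg(v)), which guarantees that the weights of all examples sharing any one object sum to at most 1 (a fractional matching). Note this heuristic is only a simple feasible weighting for illustration; it is not the paper's FPTAS, and the function names are invented for this sketch.

```python
from collections import Counter


def example_weights(edges):
    """Heuristic weights for networked examples given as (u, v) object pairs.

    Each example gets weight 1/max(deg(u), deg(v)), so for every object the
    weights of its incident examples sum to at most deg * (1/deg) = 1.
    This yields a feasible fractional matching, not the optimal weighting.
    """
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return [1.0 / max(deg[u], deg[v]) for u, v in edges]


def weighted_empirical_risk(weights, losses):
    """Weighted ERM objective: weight-normalized sum of per-example losses."""
    total = sum(weights)
    return sum(w * l for w, l in zip(weights, losses)) / total
```

For instance, on the edges [("a","b"), ("a","c"), ("b","c"), ("a","d")] object "a" has degree 3, so each of its three examples receives weight 1/3 and their sum at "a" is exactly 1, while less-connected objects stay below the cap. Plugging such weights into `weighted_empirical_risk` down-weights examples that share heavily reused objects, which is the intuition behind the bounds discussed above.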


