Surrogate Functions for Maximizing Precision at the Top

05/26/2015
by Purushottam Kar, et al.

The problem of maximizing precision at the top of a ranked list, often dubbed Precision@k (prec@k), finds relevance in myriad learning applications such as ranking, multi-label classification, and learning with severe label imbalance. However, despite its popularity, there exist significant gaps in our understanding of this problem and its associated performance measure. The most notable of these is the lack of a convex upper bounding surrogate for prec@k. We also lack scalable perceptron and stochastic gradient descent algorithms for optimizing this performance measure. In this paper we make key contributions in these directions. At the heart of our results is a family of truly upper bounding surrogates for prec@k. These surrogates are motivated in a principled manner and enjoy attractive properties such as consistency with prec@k under various natural margin/noise conditions. These surrogates are then used to design a class of novel perceptron algorithms for optimizing prec@k with provable mistake bounds. We also devise scalable stochastic gradient descent-style methods for this problem with provable convergence bounds. Our proofs rely on novel uniform convergence bounds which require an in-depth analysis of the structural properties of prec@k and its surrogates. We conclude with experimental results comparing our algorithms with state-of-the-art cutting plane and stochastic gradient algorithms for maximizing prec@k.
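To make the performance measure concrete, here is a minimal sketch of how prec@k is typically computed for a binary relevance problem: score all instances with the current model, take the k highest-scoring ones, and report the fraction of those that are truly positive. The function name and the use of NumPy are illustrative choices and not part of the paper.

```python
import numpy as np

def precision_at_k(scores, labels, k):
    """Fraction of true positives among the k highest-scoring instances.

    scores : array of real-valued model scores, one per instance
    labels : array of binary relevance labels (1 = positive, 0 = negative)
    k      : number of top-ranked instances to inspect
    """
    top_k = np.argsort(-scores)[:k]        # indices of the k largest scores
    return labels[top_k].sum() / k         # prec@k, a value in [0, 1]

# Toy example: 2 of the 3 top-scoring instances are relevant
scores = np.array([0.9, 0.8, 0.3, 0.75, 0.1])
labels = np.array([1,   0,   1,   1,    0])
print(precision_at_k(scores, labels, k=3))   # -> 0.666...
```

Note that prec@k is non-decomposable: the contribution of an instance depends on how it ranks relative to all the others, which is why maximizing it calls for specialized surrogates rather than a standard pointwise convex loss.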
