Large-scale Stochastic Optimization of NDCG Surrogates for Deep Learning with Provable Convergence

02/24/2022
by Zi-Hao Qiu, et al.

NDCG, namely Normalized Discounted Cumulative Gain, is a widely used ranking metric in information retrieval and machine learning. However, efficient and provable stochastic methods for maximizing NDCG are still lacking, especially for deep models. In this paper, we propose a principled approach to optimizing NDCG and its top-K variant. First, we formulate a novel compositional optimization problem for optimizing an NDCG surrogate, and a novel bilevel compositional optimization problem for optimizing a top-K NDCG surrogate. Then, we develop efficient stochastic algorithms with provable convergence guarantees for the resulting non-convex objectives. Unlike existing NDCG optimization methods, the per-iteration complexity of our algorithms scales with the mini-batch size instead of the total number of items. To improve effectiveness for deep learning, we further propose practical strategies: an initial warm-up and a stop-gradient operator. Experimental results on multiple datasets demonstrate that our methods outperform prior ranking approaches in terms of NDCG. To the best of our knowledge, this is the first work to propose stochastic algorithms that optimize NDCG with a provable convergence guarantee.
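For concreteness, below is a minimal PyTorch sketch of the kind of smooth NDCG surrogate the abstract describes, not the authors' exact algorithm: the hard rank of each item is replaced by a sigmoid-based smooth rank, which makes the objective a composition of the DCG gain with an inner rank-estimating function. The function name `smoothed_ndcg_loss`, the temperature `tau`, and the sigmoid surrogate choice are illustrative assumptions.

```python
import torch

def smoothed_ndcg_loss(scores, relevance, tau=1.0):
    """Naive in-batch smoothed NDCG surrogate (negated for minimization).

    The hard rank r(i) = 1 + #{j != i : s_j > s_i} is replaced by the
    smooth estimate 1 + sum_{j != i} sigmoid((s_j - s_i) / tau), so the
    objective becomes a composition of the DCG gain with an inner
    rank-estimating function. Computing that inner function from a
    mini-batch alone gives a biased estimate, which is the difficulty
    the paper's compositional estimators address.
    """
    gains = 2.0 ** relevance - 1.0                             # 2^y - 1
    diff = (scores.unsqueeze(1) - scores.unsqueeze(0)) / tau   # [j, i] = s_j - s_i
    smooth_ranks = 1.0 + torch.sigmoid(diff).sum(dim=0) - 0.5  # drop j == i term; sigmoid(0) = 0.5
    dcg = (gains / torch.log2(1.0 + smooth_ranks)).sum()
    # ideal DCG: gains placed at their best possible positions
    sorted_gains, _ = torch.sort(gains, descending=True)
    positions = torch.arange(1, relevance.numel() + 1, dtype=scores.dtype)
    idcg = (sorted_gains / torch.log2(1.0 + positions)).sum()
    return -dcg / idcg.clamp_min(1e-8)
```

In this naive form, the rank estimate is computed only over the items in the batch and is therefore biased; the stochastic algorithms in the paper instead track the inner rank function with per-item moving-average estimates (combined with the stop-gradient operator mentioned above), which is what lets the per-iteration cost scale with the mini-batch size rather than the total number of items.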
