Direction-oriented Multi-objective Learning: Simple and Provable Stochastic Algorithms

05/28/2023
by Peiyao Xiao, et al.

Multi-objective optimization (MOO) has become an influential framework for many machine learning problems with multiple objectives, such as learning with multiple criteria and multi-task learning (MTL). In this paper, we propose a new direction-oriented multi-objective problem that regularizes the common descent direction within a neighborhood of a direction optimizing a linear combination of objectives, such as the average loss in MTL. This formulation includes GD and MGDA as special cases, enjoys the direction-oriented benefit as in CAGrad, and facilitates the design of stochastic algorithms. To solve this problem, we propose Stochastic Direction-oriented Multi-objective Gradient descent (SDMGrad) with simple SGD-type updates, and its variant SDMGrad-OS with efficient objective sampling for the setting where the number of objectives is large. For a constant-level regularization parameter λ, we show that SDMGrad and SDMGrad-OS provably converge to a Pareto stationary point with improved complexities and milder assumptions. For an increasing λ, this convergent point reduces to a stationary point of the linear combination of objectives. We demonstrate the superior performance of the proposed methods in a series of tasks on multi-task supervised learning and reinforcement learning. Code is provided at https://github.com/ml-opt-lab/sdmgrad.
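To make the idea concrete, here is a minimal NumPy sketch of a direction-oriented update in the spirit the abstract describes: the combining weights w are found on the probability simplex so that the resulting direction Gᵀw, regularized by λ toward a target direction g0 (e.g., the average gradient), has small norm. This is a hypothetical simplified instantiation for illustration, not the paper's exact algorithm; the objective `min_w ||Gᵀw + λ·g0||²`, the solver (projected gradient descent), and all function names are assumptions.

```python
import numpy as np

def simplex_projection(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def direction_oriented_update(G, lam=0.1, n_steps=50, lr=0.5):
    """
    G: (K, d) array of per-objective gradients.
    Returns a common descent direction d = G^T w + lam * g0, where
    g0 is the average gradient (the target direction) and w is chosen
    on the simplex to (approximately) minimize ||G^T w + lam * g0||^2
    via projected gradient descent. Simplified illustrative sketch.
    """
    K, _ = G.shape
    g0 = G.mean(axis=0)          # direction of the linear combination (average loss)
    w = np.ones(K) / K           # start from uniform weights
    for _ in range(n_steps):
        d = G.T @ w + lam * g0
        grad_w = G @ d           # gradient of 0.5 * ||d||^2 w.r.t. w
        w = simplex_projection(w - lr * grad_w)
    return G.T @ w + lam * g0

# With lam = 0 this recovers an MGDA-style min-norm direction; large lam
# pulls the update toward plain gradient descent on the average loss.
```

In a stochastic (SGD-type) variant, G would be built from mini-batch gradients at each step and the model parameters updated as `theta -= step_size * direction_oriented_update(G, lam)`.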


