Optimization Strategies in Multi-Task Learning: Averaged or Separated Losses?

09/21/2021
by Lucas Pascal, et al.

In Multi-Task Learning (MTL), it is common practice to train multi-task networks by optimizing a single objective function that is a weighted average of the task-specific objective functions. Although the computational advantages of this strategy are clear, the complexity of the resulting loss landscape has not been studied in the literature. Arguably, optimizing this averaged objective may be more difficult than optimizing the constituent task-specific objectives separately. In this work, we investigate the benefits of such an alternative by alternating independent gradient descent steps on the different task-specific objective functions, and we formulate a novel way to combine this approach with state-of-the-art optimizers. As the separation of task-specific objectives comes at the cost of increased computation time, we propose random task grouping as a trade-off between better optimization and computational efficiency. Experimental results on three well-known visual MTL datasets show better overall absolute performance on losses and standard metrics, compared to an averaged objective function and to other state-of-the-art MTL methods. Our method is most beneficial when the tasks are of a different nature, and it enables a wider exploration of the shared parameter space. We also show that the random grouping strategy allows these benefits to be traded off against computational efficiency.
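To make the three strategies concrete, the sketch below contrasts a single step on the averaged objective with alternating per-task steps and with randomly grouped steps. It is a minimal PyTorch-style illustration under assumed placeholder names (the toy model, `task_loss`, and the data are ours for illustration), not the authors' implementation.

```python
# Hypothetical sketch of the optimization strategies discussed above.
# The model, tasks, and data are illustrative placeholders.
import random
import torch
import torch.nn as nn

# Toy shared-trunk multi-task network: one shared encoder, one head per task.
shared = nn.Linear(16, 32)
heads = nn.ModuleList([nn.Linear(32, 1) for _ in range(4)])
params = list(shared.parameters()) + list(heads.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)

def task_loss(task_idx, x, y):
    """Task-specific objective L_i (mean squared error as a stand-in)."""
    return nn.functional.mse_loss(heads[task_idx](torch.relu(shared(x))), y)

x = torch.randn(8, 16)
ys = [torch.randn(8, 1) for _ in range(4)]

# (a) Common practice: one step on the weighted average of all task losses.
optimizer.zero_grad()
avg_loss = sum(task_loss(i, x, ys[i]) for i in range(4)) / 4
avg_loss.backward()
optimizer.step()

# (b) Separated objectives: alternate one independent descent step per task,
# so each update follows a single task-specific loss landscape.
for i in range(4):
    optimizer.zero_grad()
    task_loss(i, x, ys[i]).backward()
    optimizer.step()

# (c) Random grouping trade-off: partition tasks into random groups and take
# one step per group on that group's averaged loss (here, groups of size 2).
task_ids = list(range(4))
random.shuffle(task_ids)
for group in (task_ids[:2], task_ids[2:]):
    optimizer.zero_grad()
    group_loss = sum(task_loss(i, x, ys[i]) for i in group) / len(group)
    group_loss.backward()
    optimizer.step()
```

Note that this sketch naively reuses one stateful optimizer (Adam) across the alternating steps, so moment estimates are mixed across tasks; how to combine separated objectives correctly with such optimizers is part of what the paper's proposed formulation addresses.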


Related research

07/14/2020 · Knowledge Distillation for Multi-task Learning
Multi-task learning (MTL) is to learn one single model that performs mul...

03/23/2020 · Learned Weight Sharing for Deep Multi-Task Learning by Natural Evolution Strategy and Stochastic Gradient Descent
In deep multi-task learning, weights of task-specific networks are share...

04/15/2019 · MultiNet++: Multi-Stream Feature Aggregation and Geometric Loss Strategy for Multi-Task Learning
Multi-task learning is commonly used in autonomous driving for solving v...

02/12/2020 · A Simple General Approach to Balance Task Difficulty in Multi-Task Learning
In multi-task learning, difficulty levels of different tasks are varying...

04/06/2020 · A Generalized Multi-Task Learning Approach to Stereo DSM Filtering in Urban Areas
City models and height maps of urban areas serve as a valuable data sour...

03/20/2021 · Efficient Global Optimization of Non-differentiable, Symmetric Objectives for Multi Camera Placement
We propose a novel iterative method for optimally placing and orienting ...

09/23/2022 · Do Current Multi-Task Optimization Methods in Deep Learning Even Help?
Recent research has proposed a series of specialized optimization algori...
