Adaptive Data Fusion for Multi-task Non-smooth Optimization

10/22/2022
by Henry Lam, et al.

We study the problem of multi-task non-smooth optimization, which arises ubiquitously in statistical learning, decision-making, and risk management. We develop a data fusion approach that adaptively leverages commonalities among a large number of objectives to improve sample efficiency while accounting for their unknown heterogeneities. We provide sharp statistical guarantees for our approach. Numerical experiments on both synthetic and real data demonstrate significant advantages of our approach over benchmarks.
