ADDS: Adaptive Differentiable Sampling for Robust Multi-Party Learning

10/29/2021
by Maoguo Gong, et al.

Distributed multi-party learning provides an effective approach for training a joint model on scattered data under legal and practical constraints. However, because data labels are skewed across participants and local devices face computational bottlenecks, building smaller customized models for clients in various scenarios while still producing updates applicable to the central model remains a challenge. In this paper, we propose a novel adaptive differentiable sampling framework (ADDS) for robust and communication-efficient multi-party learning. Inspired by the idea of dropout in neural networks, we introduce a network sampling strategy in the multi-party setting that distributes different subnets of the central model to clients for updating, while differentiable sampling rates allow each client to extract the optimal local architecture from the supernet according to its private data distribution. The approach requires minimal modification to the existing multi-party learning structure and integrates the local updates of all subnets into the supernet, improving the robustness of the central model. As we demonstrate through experiments on real-world datasets, the proposed framework significantly reduces local computation and communication costs while speeding up convergence of the central model.
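To make the mechanism concrete, below is a minimal, hypothetical PyTorch sketch of the subnet-sampling idea the abstract describes: a learnable per-unit keep probability is relaxed with Gumbel-sigmoid noise so the sampling rates stay differentiable, each client trains a sampled subnet, and the server averages the resulting deltas back into the supernet. All names here (SuperNet, sample_mask, server_round, keep_logits) and the particular relaxation are assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch of ADDS-style subnet sampling, NOT the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SuperNet(nn.Module):
    """Central model whose hidden units can be sampled into client subnets."""
    def __init__(self, d_in=20, d_hidden=64, d_out=2):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_out)
        # Learnable per-unit keep logits; sigmoid(keep_logits) is the
        # sampling rate, optimized jointly with the network weights.
        self.keep_logits = nn.Parameter(torch.zeros(d_hidden))

    def forward(self, x, mask=None):
        h = F.relu(self.fc1(x))
        if mask is not None:  # softly zero out units not in the subnet
            h = h * mask
        return self.fc2(h)

def sample_mask(logits, tau=0.5):
    """Relaxed Bernoulli (Gumbel-sigmoid) mask: near-binary but
    differentiable, so gradients reach the sampling rates themselves."""
    u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
    noise = torch.log(u) - torch.log(1 - u)  # logistic noise
    return torch.sigmoid((logits + noise) / tau)

def server_round(supernet, clients, lr=0.1):
    """One communication round: each client trains a sampled subnet
    locally; the server averages the deltas back into the supernet."""
    base = {k: v.clone() for k, v in supernet.state_dict().items()}
    deltas = []
    for data, target in clients:
        local = SuperNet()
        local.load_state_dict(base)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        mask = sample_mask(local.keep_logits)  # client-specific subnet
        loss = F.cross_entropy(local(data, mask), target)
        opt.zero_grad()
        loss.backward()  # gradients reach weights AND keep_logits
        opt.step()
        deltas.append({k: local.state_dict()[k] - base[k] for k in base})
    # Integrate all subnet updates into the supernet (FedAvg-style).
    merged = {k: base[k] + torch.stack([d[k] for d in deltas]).mean(dim=0)
              for k in base}
    supernet.load_state_dict(merged)

# Toy usage: three "clients" with random data, one communication round.
torch.manual_seed(0)
net = SuperNet()
clients = [(torch.randn(32, 20), torch.randint(0, 2, (32,)))
           for _ in range(3)]
server_round(net, clients)
```

In the paper's full setting, the sampled architecture would presumably adapt per client to its data distribution and hardware budget; this sketch resamples a fresh mask each round only to keep the example short.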


