Improving Privacy-Preserving Vertical Federated Learning by Efficient Communication with ADMM

07/20/2022
by Chulin Xie, et al.

Federated learning (FL) enables distributed devices to jointly train a shared model while keeping the training data local. Unlike the horizontal FL (HFL) setting, where each client holds a subset of the data samples, vertical FL (VFL), where each client collects a subset of the features, has attracted intensive research effort recently. In this paper, we identify two challenges facing state-of-the-art VFL frameworks: (1) some works directly average the learned feature embeddings and therefore may lose the unique properties of each local feature set; (2) the server must exchange gradients with the clients at every training step, incurring a high communication cost that leads to rapid consumption of the privacy budget. To address these challenges, we propose an efficient VFL framework with multiple linear heads (VIM), where each head corresponds to a local client, so that the separate contribution of each client is taken into account. We further propose an Alternating Direction Method of Multipliers (ADMM)-based method to solve the resulting optimization problem; it reduces the communication cost by allowing multiple local updates per communication round, and thus achieves better performance under differential privacy. We consider VFL both with and without model splitting, and for both settings we carefully analyze the differential privacy mechanism of our framework. Moreover, a byproduct of our framework is that the weights of the learned linear heads reflect the importance of the local clients. Extensive evaluations on four real-world datasets show that VIM achieves significantly higher performance and faster convergence than state-of-the-art methods. We also explicitly evaluate the importance of local clients and show that VIM enables functionalities such as client-level explanation and client denoising.
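To make the two ideas above concrete, here is a minimal NumPy sketch of a VIM-style training round with model splitting. It is illustrative only: the single-layer tanh extractors, the plain Gaussian perturbation standing in for a calibrated DP mechanism, and the gradient-descent local steps standing in for the paper's ADMM subproblem solves are all simplifying assumptions, not the authors' exact algorithm.

```python
# Sketch of VIM-style VFL with model splitting (illustrative, not the paper's
# exact ADMM algorithm). Each client owns a feature partition; the server
# keeps one linear head per client instead of averaging embeddings.
import numpy as np

rng = np.random.default_rng(0)

n, d_embed, n_clients, n_classes = 256, 8, 3, 4
feature_dims = [5, 7, 6]          # vertical split: client m owns feature_dims[m] columns
X_parts = [rng.normal(size=(n, d)) for d in feature_dims]
y = rng.integers(0, n_classes, size=n)

# Client-side extractors (one linear layer + tanh for brevity) and one
# server-side linear head PER CLIENT.
W_local = [rng.normal(scale=0.1, size=(d, d_embed)) for d in feature_dims]
heads = [rng.normal(scale=0.1, size=(d_embed, n_classes)) for _ in range(n_clients)]

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr, local_steps, noise_std = 0.1, 5, 0.01   # noise_std stands in for a calibrated DP mechanism

for rnd in range(100):
    # Communication step: clients send perturbed embeddings (DP sketch).
    H = [np.tanh(X_parts[m] @ W_local[m]) for m in range(n_clients)]
    H_noisy = [h + rng.normal(scale=noise_std, size=h.shape) for h in H]
    # Server: logits are the SUM of per-client head outputs, so each client's
    # contribution keeps its own weights rather than being averaged away.
    logits = sum(H_noisy[m] @ heads[m] for m in range(n_clients))
    grad_logits = (softmax(logits) - np.eye(n_classes)[y]) / n
    # Server sends each client the gradient w.r.t. its embedding, then
    # updates its per-client heads.
    G = [grad_logits @ heads[m].T for m in range(n_clients)]
    for m in range(n_clients):
        heads[m] -= lr * H_noisy[m].T @ grad_logits
    # Multiple LOCAL updates per round, reusing the stale signal G[m]; this
    # stands in for the ADMM subproblem solves that cut communication.
    for m in range(n_clients):
        for _ in range(local_steps):
            h = np.tanh(X_parts[m] @ W_local[m])
            W_local[m] -= lr * X_parts[m].T @ (G[m] * (1 - h ** 2))

# Byproduct: head weight norms as a rough per-client importance score.
print("client importance (head norms):",
      [round(float(np.linalg.norm(h)), 3) for h in heads])
```

Summing per-head outputs keeps a separate set of weights for every client, which is what makes the head norms usable as an importance score for client-level explanation or denoising; an averaged-embedding design has no per-client parameters to inspect.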


Related research

07/12/2020
VAFL: a Method of Vertical Asynchronous Federated Learning
Horizontal Federated learning (FL) handles multi-client data that share ...

07/20/2022
FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning
Federated learning (FL) has recently attracted increasing attention from...

01/17/2023
FedCliP: Federated Learning with Client Pruning
The prevalent communication efficient federated learning (FL) frameworks...

07/25/2023
Blockchain-based Optimized Client Selection and Privacy Preserved Framework for Federated Learning
Federated learning is a distributed mechanism that trains large-scale n...

05/31/2021
Unifying Distillation with Personalization in Federated Learning
Federated learning (FL) is a decentralized privacy-preserving learning t...

02/24/2023
Subspace based Federated Unlearning
Federated learning (FL) enables multiple clients to train a machine lear...

10/04/2022
OpBoost: A Vertical Federated Tree Boosting Framework Based on Order-Preserving Desensitization
Vertical Federated Learning (FL) is a new paradigm that enables users wi...
