Riemannian Low-Rank Model Compression for Federated Learning with Over-the-Air Aggregation

06/04/2023
by Ye Xue, et al.

Low-rank model compression is a widely used technique for reducing the computational load of training machine learning models. However, existing methods often relax the low-rank constraint on the model weights into a nuclear-norm penalty, whose regularization weight is difficult to choose in practice. Furthermore, existing compression techniques are not directly applicable to efficient over-the-air (OTA) aggregation in federated learning (FL) systems for distributed Internet-of-Things (IoT) scenarios. In this paper, we propose a novel manifold optimization formulation for low-rank model compression in FL that does not relax the low-rank constraint: the optimization is carried out directly over the low-rank manifold, guaranteeing that the model is exactly low-rank. We also introduce a consensus penalty in the formulation to support OTA aggregation. Based on this formulation, we propose an alternating Riemannian optimization algorithm with a precoder that enables efficient OTA aggregation of low-rank local models without sacrificing training performance. We further provide a convergence analysis in terms of key system parameters and conduct extensive experiments on real-world datasets, demonstrating the effectiveness of the proposed Riemannian low-rank model compression scheme against various state-of-the-art baselines.
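The abstract gives no algorithmic details, so the following is only a minimal numpy sketch of the generic ingredient it names: one Riemannian gradient step on the manifold of fixed-rank matrices, using the tangent-space projection and truncated-SVD retraction that are standard in this setting. The toy quadratic objective, step size eta, and rank r=2 are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def svd_retract(X, r):
    """Retract a matrix back onto the rank-r manifold via a
    truncated SVD (a common, though not the only, retraction)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

def riemannian_grad_step(W, euclid_grad, r, eta):
    """One Riemannian gradient step on the rank-r manifold: project the
    Euclidean gradient onto the tangent space at W, step, then retract."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U, V = U[:, :r], Vt[:r, :].T              # orthonormal factors of W
    M = U.T @ euclid_grad @ V                 # r x r core block
    Up = euclid_grad @ V - U @ M              # component orthogonal to U
    Vp = euclid_grad.T @ U - V @ M.T          # component orthogonal to V
    xi = U @ M @ V.T + Up @ V.T + U @ Vp.T    # Riemannian gradient at W
    return svd_retract(W - eta * xi, r)

# Toy usage: fit a rank-2 matrix W to a noisy matrix A, i.e.
# minimize f(W) = 0.5 * ||W - A||_F^2 over the rank-2 manifold.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 30))
A += 0.01 * rng.standard_normal(A.shape)
W = svd_retract(rng.standard_normal(A.shape), r=2)    # rank-2 initial point
for _ in range(100):
    W = riemannian_grad_step(W, W - A, r=2, eta=0.5)  # grad f(W) = W - A
print("rank:", np.linalg.matrix_rank(W), "residual:", np.linalg.norm(W - A))
```

Because every iterate is produced by the retraction, the model stays exactly rank r throughout, which is the point the abstract makes in contrast to nuclear-norm relaxation.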

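The abstract also pairs the low-rank constraint with a consensus penalty so that local models can be combined by OTA aggregation. As a rough, self-contained illustration of that idea (not the paper's precoder design), the sketch below simulates K clients whose analog transmissions superpose over the channel, so the server receives the sum of the local models plus Gaussian noise; each client then takes a penalized local step pulling it toward the aggregate. The constants K, rho, eta, noise_std, and the local targets A_k are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
K, m, n, r = 4, 20, 30, 2           # clients, matrix size, target rank (illustrative)
rho, eta, noise_std = 1.0, 0.2, 0.01

def truncate(X, r):
    """Keep only the top-r singular directions of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

# Client k fits a rank-r model W_k to its own data matrix A_k:
# f_k(W) = 0.5*||W - A_k||_F^2 + (rho/2)*||W - W_bar||_F^2,
# where the consensus term pulls W_k toward the OTA aggregate W_bar.
base = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
A = [base + 0.1 * rng.standard_normal((m, n)) for _ in range(K)]
W = [truncate(rng.standard_normal((m, n)), r) for _ in range(K)]

for _ in range(50):
    # OTA aggregation: all clients transmit simultaneously; the channel
    # adds their signals, so the server sees sum + noise and rescales.
    received = sum(W) + noise_std * rng.standard_normal((m, n))
    W_bar = received / K
    # Penalized local gradient step, truncated back to rank r.
    W = [truncate(Wk - eta * ((Wk - Ak) + rho * (Wk - W_bar)), r)
         for Wk, Ak in zip(W, A)]

print("consensus gap:", max(np.linalg.norm(Wk - W_bar) for Wk in W))
```

Note that the average of rank-r matrices is generally not rank r, which is why the paper's alternating scheme and precoder are needed; this sketch simply re-truncates after each step for illustration.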

Related research

04/26/2021 · Communication-Efficient Federated Learning with Dual-Side Low-Rank Compression
Federated learning (FL) is a promising and powerful approach for trainin...

11/29/2021 · FedHM: Efficient Federated Learning for Heterogeneous Models via Low-rank Factorization
The underlying assumption of recent federated learning (FL) paradigms is...

08/13/2021 · FedPara: Low-rank Hadamard Product Parameterization for Efficient Federated Learning
To overcome the burdens on frequent model uploads and downloads during f...

02/01/2022 · Recycling Model Updates in Federated Learning: Are Gradient Subspaces Low-Rank?
In this paper, we question the rationale behind propagating large number...

05/14/2020 · Multilevel Riemannian optimization for low-rank problems
Large-scale optimization problems arising from the discretization of pro...

01/20/2023 · HALOC: Hardware-Aware Automatic Low-Rank Compression for Compact Neural Networks
Low-rank compression is an important model compression strategy for obta...

05/29/2019 · Fast and Robust Rank Aggregation against Model Misspecification
In rank aggregation, preferences from different users are summarized int...
