Just One Byte (per gradient): A Note on Low-Bandwidth Decentralized Language Model Finetuning Using Shared Randomness

06/16/2023
by Eric Zelikman, et al.

Language model training in distributed settings is limited by the communication cost of gradient exchanges. In this short note, we extend recent work from Malladi et al. (2023) and use shared randomness to perform distributed fine-tuning with low bandwidth. The method is a natural decentralized extension of memory-efficient Simultaneous Perturbation Stochastic Approximation (SPSA). At each iteration, every machine seeds a random number generator (RNG) to perform reproducible local perturbations of the model weights, then computes and exchanges scalar projected gradients, which are used to update each model. Because the random seed is a (machine, sample) identifier, each machine can regenerate every other machine's perturbations. Since machines exchange only single-byte projected gradients, the method is highly communication-efficient. There are also potential privacy benefits: projected gradients may be calculated on different training data, and no model ever accesses another's data. Our approach not only drastically reduces communication bandwidth requirements but also accommodates dynamic addition or removal of machines during training, and it retains the memory-efficient and inference-only advantages of recent work. We perform proof-of-concept experiments that demonstrate the potential usefulness of this method, building on a rich literature on distributed optimization and memory-efficient training.
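
To make the procedure concrete, the following is a minimal sketch, assuming a MeZO-style two-point SPSA estimator and a toy linear model in PyTorch. The function names (perturb_, projected_gradient, apply_updates), the hyperparameters, and the (machine, step) seed standing in for the paper's (machine, sample) identifier are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the authors' code) of decentralized SPSA fine-tuning
# with shared randomness: each machine regenerates perturbations from a shared
# seed, computes a scalar projected gradient on its own data, and exchanges
# only that scalar.
import torch

def perturb_(params, seed, scale):
    """In-place perturbation theta <- theta + scale * z, where z ~ N(0, I) is
    regenerated deterministically from `seed`, so z is never stored or sent."""
    gen = torch.Generator().manual_seed(seed)
    for p in params:
        p.add_(scale * torch.randn(p.shape, generator=gen))

def projected_gradient(params, loss_fn, seed, eps=1e-3):
    """Two-point SPSA estimate g = (L(theta + eps*z) - L(theta - eps*z)) / (2*eps),
    computed with forward passes only (no backprop, no stored activations)."""
    with torch.no_grad():
        perturb_(params, seed, +eps)
        loss_plus = loss_fn()
        perturb_(params, seed, -2 * eps)
        loss_minus = loss_fn()
        perturb_(params, seed, +eps)  # restore the original weights
    return ((loss_plus - loss_minus) / (2 * eps)).item()

def apply_updates(params, grads_and_seeds, lr):
    """Every machine applies the same update theta <- theta - lr * g * z,
    regenerating each other machine's z from the shared seed."""
    with torch.no_grad():
        for g, seed in grads_and_seeds:
            perturb_(params, seed, -lr * g)

# Toy usage: two "machines" fine-tune a shared linear model on disjoint data,
# exchanging one scalar per machine per step (quantized to a single byte in the
# paper; left as a float here for simplicity).
torch.manual_seed(0)
w = torch.zeros(8)
machine_data = [(torch.randn(64, 8), torch.randn(64)) for _ in range(2)]

for step in range(200):
    exchanged = []
    for machine_id, (x, y) in enumerate(machine_data):
        seed = 10_000 * step + machine_id  # shared (machine, step) identifier
        loss_fn = lambda x=x, y=y: ((x @ w - y) ** 2).mean()
        g = projected_gradient([w], loss_fn, seed)
        exchanged.append((g, seed))  # only this scalar crosses the wire
    apply_updates([w], exchanged, lr=1e-2)  # identical update on every machine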
