Deep Partial Updating

07/06/2020
by Zhongnan Qu, et al.

Emerging edge intelligence applications require the server to continuously retrain and update the deep neural networks deployed on remote edge nodes in order to leverage newly collected data samples. Unfortunately, it may be impossible in practice to continuously send fully updated weights to these edge nodes due to the highly constrained communication resource. In this paper, we propose the weight-wise deep partial updating paradigm, which smartly selects only a subset of weights to update at each server-to-edge communication round, while achieving performance similar to that of full updating. Our method is established by analytically upper-bounding the loss difference between partial updating and full updating, and only updates the weights that make the largest contributions to this upper bound. Extensive experimental results demonstrate the efficacy of our partial updating methodology, which achieves high inference accuracy while updating only a small number of weights.
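To make the communication pattern concrete, the sketch below illustrates one partial-updating round under simplifying assumptions: the paper scores each weight by its contribution to an analytical upper bound on the loss difference, whereas the sketch substitutes a simple |delta * gradient| score as a placeholder, and the function names and the 5% budget are hypothetical, not taken from the paper.

```python
import numpy as np

def select_partial_update(w_old, w_new, grad, budget):
    """Server side: pick the subset of weights whose update matters most.

    w_old, w_new : flat arrays of the deployed and the fully retrained weights
    grad         : per-weight gradient of the loss at w_old (used as a proxy term)
    budget       : number of weights one server-to-edge round can carry

    Note: the paper derives the contribution score from an upper bound on the
    loss difference between partial and full updating; |delta * grad| below is
    only an illustrative stand-in for that score.
    """
    delta = w_new - w_old
    score = np.abs(delta * grad)                      # placeholder contribution score
    idx = np.argpartition(score, -budget)[-budget:]   # indices of the top-`budget` weights
    return idx, delta[idx]                            # sparse update: indices + values

def apply_partial_update(w_deployed, idx, values):
    """Edge-node side: apply the received sparse update in place."""
    w_deployed[idx] += values
    return w_deployed

# Toy usage: update only 5% of the weights in one communication round.
rng = np.random.default_rng(0)
w_old = rng.normal(size=10_000)
w_new = w_old + 0.01 * rng.normal(size=10_000)        # stand-in for the fully retrained weights
grad = rng.normal(size=10_000)                        # stand-in for per-weight gradients
idx, vals = select_partial_update(w_old, w_new, grad, budget=500)
w_edge = apply_partial_update(w_old.copy(), idx, vals)
```

Only the selected indices and their new values cross the server-to-edge link, which is the source of the communication savings the abstract describes.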


Related research

Weight Update Skipping: Reducing Training Time for Artificial Neural Networks (12/05/2020)
Artificial Neural Networks (ANNs) are known as state-of-the-art techniqu...

Edge-Assisted On-Device Model Update for Video Analytics in Adverse Environments (08/31/2023)
While large deep neural networks excel at general video analytics tasks,...

Coded Gradient Aggregation: A Tradeoff Between Communication Costs at Edge Nodes and at Helper Nodes (05/06/2021)
The increasing amount of data generated at the edge/client nodes and the...

Toward efficient resource utilization at edge nodes in federated learning (09/19/2023)
Federated learning (FL) enables edge nodes to collaboratively contribute...

Update Compression for Deep Neural Networks on the Edge (03/09/2022)
An increasing number of artificial intelligence (AI) applications involv...

Distributed Optimization for Over-Parameterized Learning (06/14/2019)
Distributed optimization often consists of two updating phases: local op...

Online learning using multiple times weight updating (10/26/2018)
Online learning makes sequence of decisions with partial data arrival wh...
