Resource-Constrained On-Device Learning by Dynamic Averaging

09/25/2020
by   Lukas Heppe, et al.

Communication between data-generating devices is responsible for a growing portion of the world's power consumption. Reducing communication is therefore vital from both an economic and an ecological perspective. In machine learning, on-device learning avoids sending raw data, which can reduce communication substantially; keeping the data decentralized also protects privacy-sensitive information. However, most learning algorithms require hardware with high computational power and thus high energy consumption. In contrast, ultra-low-power processors, such as FPGAs or microcontrollers, allow for energy-efficient learning of local models. Combined with communication-efficient distributed learning strategies, this reduces the overall energy consumption and enables applications that were previously impossible due to the limited energy budget of local devices. The major challenge is that low-power processors typically offer only integer processing capabilities. This paper investigates an approach to communication-efficient on-device learning of integer exponential families that can be executed on low-power processors, is privacy-preserving, and effectively minimizes communication. The empirical evaluation shows that the approach can reach a model quality comparable to a centrally learned regular model with an order of magnitude less communication. Comparing overall energy consumption shows that this reduces the energy required to solve the machine learning task by a significant amount.
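The dynamic averaging strategy named in the title can be illustrated with a minimal sketch: each device monitors a purely local divergence condition against the last jointly agreed model, and communication happens only when that condition is violated. This is a simplified illustration of the general protocol (all names are ours, and the resolution step here is a full synchronization rather than the incremental partial averaging of the actual method, and it uses floats rather than the paper's integer models):

```python
def dynamic_averaging_round(models, reference, delta):
    """One simplified round of a dynamic averaging protocol.

    models    -- list of local model parameter vectors (lists of numbers)
    reference -- last jointly agreed ("reference") model vector
    delta     -- divergence threshold controlling communication

    Each device checks locally whether its model has drifted more than
    `delta` (in squared Euclidean distance) from the reference. If no
    device violates the condition, zero messages are exchanged.
    """
    def sq_dist(w, r):
        return sum((wi - ri) ** 2 for wi, ri in zip(w, r))

    if all(sq_dist(w, reference) <= delta for w in models):
        return models, reference, 0  # all devices quiet: no communication

    # Simplified resolution: full synchronization to the global average.
    # (The actual protocol resolves violations more cheaply by averaging
    # over growing subsets of devices.)
    n, dim = len(models), len(reference)
    new_ref = [sum(w[j] for w in models) / n for j in range(dim)]
    synced = [list(new_ref) for _ in models]
    messages = 2 * n  # each device uploads its model once, downloads once
    return synced, new_ref, messages
```

The communication savings come from the quiet case: as long as local models stay within `delta` of the reference, rounds cost nothing, whereas periodic averaging would pay the full synchronization cost every round.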


