Enabling Incremental Training with Forward Pass for Edge Devices

03/25/2021
by Dana AbdulQader, et al.

Deep Neural Networks (DNNs) are commonly deployed on end devices that operate in constantly changing environments. For the system to maintain its accuracy, it must be able to adapt to changes and recover by retraining parts of the network. However, end devices have limited resources, making on-device training challenging. Moreover, training deep neural networks is both memory and compute intensive due to the backpropagation algorithm. In this paper we introduce a method using evolutionary strategy (ES) that can partially retrain the network, enabling it to adapt to changes and recover after an error has occurred. This technique enables training on inference-only hardware, without the need for backpropagation and with minimal resource overhead. We demonstrate the ability of our technique to retrain a quantized MNIST neural network after injecting noise into the input. Furthermore, we present the micro-architecture required to enable training on HLS4ML (an inference hardware architecture) and implement it in Verilog. We synthesize our implementation for a Xilinx Kintex UltraScale Field Programmable Gate Array (FPGA), resulting in less than 1% resource overhead to implement the incremental training.
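The core idea, retraining only a small slice of the network using nothing but forward passes, can be illustrated with a minimal sketch. The Python/NumPy example below is not the paper's implementation: it is an illustrative antithetic evolutionary-strategy update applied to a single retrainable output layer while the rest of the network stays frozen. All names, shapes, and hyperparameters (pop, sigma, lr) are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W_frozen, w_last):
    # Frozen feature extractor followed by the one retrainable layer.
    h = np.maximum(x @ W_frozen, 0.0)   # fixed hidden layer (ReLU)
    return h @ w_last                    # retrainable output layer

def loss(w_last, X, y, W_frozen):
    # Cross-entropy computed from forward passes only.
    logits = forward(X, W_frozen, w_last)
    logits = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(len(y)), y] + 1e-9).mean()

def es_step(w, X, y, W_frozen, pop=20, sigma=0.05, lr=0.1):
    # Antithetic ES: score +/- perturbations of w with forward passes
    # and combine the score differences into a search direction.
    # No gradients flow through the network, so no backpropagation.
    eps = rng.standard_normal((pop,) + w.shape)
    deltas = np.array([loss(w + sigma * e, X, y, W_frozen)
                       - loss(w - sigma * e, X, y, W_frozen) for e in eps])
    # deltas[:, None, None] assumes w is a 2-D weight matrix.
    grad_est = (deltas[:, None, None] * eps).mean(axis=0) / (2.0 * sigma)
    return w - lr * grad_est

# Toy usage: 64-dim inputs, 32 frozen hidden units, 10 classes.
X = rng.standard_normal((128, 64))
y = rng.integers(0, 10, size=128)
W_frozen = rng.standard_normal((64, 32)) * 0.1   # stays fixed
w_last = np.zeros((32, 10))                      # only this layer is retrained
for _ in range(50):
    w_last = es_step(w_last, X, y, W_frozen)
```

Because each update needs only loss evaluations, this style of partial retraining maps naturally onto inference-only accelerators, which is the property the paper exploits.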

Related research

11/14/2019: An Efficient Hardware-Oriented Dropout Algorithm
This paper proposes a hardware-oriented dropout algorithm, which is effi...

12/29/2019: MTJ-Based Hardware Synapse Design for Quantized Deep Neural Networks
Quantized neural networks (QNNs) are being actively researched as a solu...

04/13/2020: Enabling Incremental Knowledge Transfer for Object Detection at the Edge
Object detection using deep neural networks (DNNs) involves a huge amoun...

02/02/2023: Bayesian Inference on Binary Spiking Networks Leveraging Nanoscale Device Stochasticity
Bayesian Neural Networks (BNNs) can overcome the problem of overconfiden...

11/03/2017: Accelerating Training of Deep Neural Networks via Sparse Edge Processing
We propose a reconfigurable hardware architecture for deep neural networ...

06/15/2017: Hardware-efficient on-line learning through pipelined truncated-error backpropagation in binary-state networks
Artificial neural networks (ANNs) trained using backpropagation are powe...

12/19/2022: XEngine: Optimal Tensor Rematerialization for Neural Networks in Heterogeneous Environments
Memory efficiency is crucial in training deep learning networks on resou...
