Efficient ConvNets for Analog Arrays

07/03/2018
by Malte J. Rasch, et al.

Analog arrays are a promising emerging hardware technology with the potential to drastically speed up deep learning. Their main advantage is that they compute matrix-vector products in constant time, irrespective of the size of the matrix. However, early convolution layers in ConvNets map very unfavorably onto analog arrays, because kernel matrices are typically small and the constant-time operation must be sequentially iterated a large number of times, reducing the speed-up advantage for ConvNets. Here, we propose replicating the kernel matrix of a convolution layer on distinct analog arrays and randomly dividing the compute among them, so that multiple kernel matrices are trained in parallel. With this modification, analog arrays execute ConvNets with an acceleration factor that is proportional to the number of kernel matrices used per layer (tested here with 16 to 128 kernel matrices). Although this convolution architecture has more free parameters, we show analytically and in numerical experiments that it is self-regularizing and implicitly learns similar filters across arrays. We also report superior performance on a number of datasets and increased robustness to adversarial attacks. Our investigation suggests revising the notion that mixed analog-digital hardware is not suitable for ConvNets.
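The routing mechanism the abstract describes can be sketched in a few lines of PyTorch. Everything below is an illustrative assumption rather than the authors' implementation: the class name ReplicatedConv2d, the weight initialization, and the fresh per-step random patch assignment are ours. The sketch only shows how im2col patches are divided across replicated kernel matrices, each standing in for one analog array whose matrix-vector product would run in constant time on hardware.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ReplicatedConv2d(nn.Module):
    """Sketch of a convolution whose kernel matrix is replicated n_rep
    times, with each input patch routed to one randomly chosen replica
    (stride 1, no padding, for brevity)."""

    def __init__(self, in_ch, out_ch, kernel_size, n_rep=16):
        super().__init__()
        self.kernel_size = kernel_size
        self.n_rep = n_rep
        self.out_ch = out_ch
        # One kernel matrix of shape (out_ch, in_ch * k * k) per replica.
        self.weights = nn.Parameter(
            torch.randn(n_rep, out_ch, in_ch * kernel_size ** 2) * 0.01
        )

    def forward(self, x):
        b, _, h, w = x.shape
        k = self.kernel_size
        # im2col: every k x k patch becomes one column vector.
        cols = F.unfold(x, k)                       # (b, in_ch*k*k, n_patch)
        n_patch = cols.shape[-1]
        # Randomly assign each patch to one replica, redrawn every step,
        # so all replicas see a different random subset of the patches.
        assign = torch.randint(self.n_rep, (n_patch,), device=x.device)
        out = cols.new_zeros(b, self.out_ch, n_patch)
        for r in range(self.n_rep):
            idx = (assign == r).nonzero(as_tuple=True)[0]
            if idx.numel():
                # The matrix-vector products one analog array would compute;
                # in simulation the replicas run sequentially in this loop.
                out[:, :, idx] = torch.einsum(
                    "oi,bip->bop", self.weights[r], cols[:, :, idx]
                )
        h_out, w_out = h - k + 1, w - k + 1
        return out.view(b, self.out_ch, h_out, w_out)

For example, ReplicatedConv2d(3, 8, kernel_size=3, n_rep=4)(torch.randn(2, 3, 32, 32)) returns a (2, 8, 30, 30) tensor. On hardware, each replica's patches would be dispatched to its own array in parallel, which is where the roughly n_rep-fold acceleration comes from; the self-regularization claimed in the abstract refers to the replicated kernel matrices converging to similar filters despite training on disjoint random patch subsets.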

