Towards Efficient Neural Networks On-a-chip: Joint Hardware-Algorithm Approaches

05/28/2019
by Xiaocong Du, et al.

Machine learning algorithms have made significant advances in many applications. However, their hardware implementation on state-of-the-art platforms still faces several challenges and is limited by factors such as memory volume, memory bandwidth, and interconnection overhead. Adopting the crossbar architecture with emerging memory technology partially solves the problem but introduces process variation and other concerns. In this paper, we present novel solutions to two fundamental issues in the crossbar implementation of Artificial Intelligence (AI) algorithms: device variation and insufficient interconnections. These solutions are inspired by the statistical properties of the algorithms themselves, especially the redundancy in neural network nodes and connections. Through Random Sparse Adaptation and pruning connections following the Small-World model, we demonstrate robust and efficient performance on representative datasets such as MNIST and CIFAR-10. Moreover, we present the Continuous Growth and Pruning algorithm for future learning and adaptation on hardware.
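
To make the Small-World pruning idea concrete, below is a minimal Python sketch that builds a Watts-Strogatz-style sparse connectivity mask and applies it to a dense weight matrix. It illustrates the general concept only, not the paper's actual procedure (Random Sparse Adaptation is not shown); the function name small_world_mask and the parameters k and beta are hypothetical choices for this example.

    import numpy as np

    def small_world_mask(n_in, n_out, k=8, beta=0.1, seed=0):
        # Build a sparse binary connectivity mask with small-world structure:
        # each output unit starts with k "local" connections to nearby inputs
        # (a regular, ring-like pattern), and each connection is rewired to a
        # random input with probability beta, in the spirit of Watts-Strogatz.
        rng = np.random.default_rng(seed)
        mask = np.zeros((n_out, n_in), dtype=np.float32)
        for i in range(n_out):
            center = int(i * n_in / n_out)          # nearest "aligned" input
            for offset in range(-(k // 2), k - k // 2):
                j = (center + offset) % n_in        # regular local connection
                if rng.random() < beta:
                    j = rng.integers(0, n_in)       # rewire to a random input
                mask[i, j] = 1.0
        return mask

    # Example: prune a dense 256x512 layer down to small-world connectivity.
    weights = np.random.randn(256, 512).astype(np.float32)
    mask = small_world_mask(n_in=512, n_out=256, k=8, beta=0.1)
    sparse_weights = weights * mask
    print(f"kept {mask.mean():.1%} of connections")

The resulting mask keeps only a small fraction of the original connections while preserving short average path lengths between units, which is the property that makes small-world connectivity attractive for interconnect-limited crossbar hardware.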
