Scalable and Sustainable Deep Learning via Randomized Hashing

02/26/2016
by   Ryan Spring, et al.

Current deep learning architectures are growing larger in order to learn from complex datasets. These architectures require giant matrix multiplication operations to train millions of parameters. Conversely, there is another growing trend to bring deep learning to low-power, embedded devices. The matrix operations, associated with both training and testing of deep networks, are very expensive from a computational and energy standpoint. We present a novel hashing-based technique to drastically reduce the amount of computation needed to train and test deep networks. Our approach combines recent ideas from adaptive dropouts and randomized hashing for maximum inner product search to efficiently select the nodes with the highest activation. Our new algorithm for deep learning reduces the overall computational cost of forward and back-propagation by operating on significantly fewer (sparse) nodes. As a consequence, our algorithm uses only 5% of the total multiplications, while keeping on average within 1% of the accuracy of the original model. A unique property of the proposed hashing-based back-propagation is that the updates are always sparse. Due to the sparse gradient updates, our algorithm is ideally suited for asynchronous and parallel training, leading to near-linear speedup with an increasing number of cores. We demonstrate the scalability and sustainability (energy efficiency) of our proposed algorithm via rigorous experimental evaluations on several real datasets.
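The key mechanism described in the abstract is a hash table built over neuron weight vectors, so that for a given input only the likely high-activation neurons are ever computed. The snippet below is a minimal sketch of that selection step, using SimHash (signed random projections) as a stand-in for the asymmetric LSH for maximum inner product search referenced in the paper; the layer sizes, `n_bits`, and helper names such as `sparse_forward` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal sketch (not the paper's exact algorithm) of hashing-based sparse
# node selection. SimHash is used here as a simple LSH family; the paper
# uses hashing tailored to maximum inner product search.
rng = np.random.default_rng(0)

n_in, n_hidden, n_bits = 128, 1024, 16                 # illustrative sizes
W = rng.standard_normal((n_hidden, n_in))              # hidden-layer weight vectors
planes = rng.standard_normal((n_bits, n_in))           # random hyperplanes for SimHash

def simhash(v):
    """Map a vector to an integer bucket id via signed random projections."""
    bits = (planes @ v) > 0
    return int(sum(1 << i for i, b in enumerate(bits) if b))

# Build the hash table once: bucket id -> indices of neurons hashed there.
table = {}
for j in range(n_hidden):
    table.setdefault(simhash(W[j]), []).append(j)

def sparse_forward(x):
    """Compute activations only for neurons whose weight vectors collide
    with the input's hash bucket; all other neurons stay at zero."""
    active = table.get(simhash(x), [])
    h = np.zeros(n_hidden)
    if active:
        h[active] = np.maximum(W[active] @ x, 0.0)      # ReLU on selected nodes only
    return h, active

x = rng.standard_normal(n_in)
h, active = sparse_forward(x)
print(f"computed {len(active)} of {n_hidden} neurons")
```

In practice one would query several hash tables with shorter keys and union their candidate sets to trade recall against sparsity. Because only the selected neurons participate in the forward pass, the corresponding gradient updates are sparse as well, which is what makes the asynchronous, near-linear parallel training mentioned in the abstract feasible.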

