Sampling-Based Techniques for Training Deep Neural Networks with Limited Computational Resources: A Scalability Evaluation

06/15/2023
by Sana Ebrahimi, et al.

Deep neural networks are superior to shallow networks in learning complex representations, so interest in applying them in large-scale settings is growing rapidly. Training neural networks is already known to be time-consuming, and a deep architecture only aggravates the issue. The training process consists mostly of matrix operations, among which matrix multiplication is the bottleneck. Several sampling-based techniques have been proposed for speeding up the training of deep neural networks by approximating these matrix products. The techniques fall into two categories: (i) sampling a subset of nodes in every hidden layer as active at every iteration and (ii) sampling a subset of nodes from the previous layer to approximate the current layer's activations using the edges from the sampled nodes. In both cases, the matrix products are computed using only the selected samples. In this paper, we evaluate the scalability of these approaches on CPU machines with limited computational resources. Framing the two research directions as special cases of approximate matrix multiplication in the context of neural networks, we provide a negative theoretical analysis showing that feedforward approximation is an obstacle to scalability. We conduct comprehensive experimental evaluations that demonstrate the most pressing challenges and limitations of the studied approaches. We observe that the hashing-based node selection method is not scalable to a large number of layers, confirming our theoretical analysis. Finally, we identify directions for future research.
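Both categories can be viewed as instances of randomized approximate matrix multiplication. As a rough illustration of technique (ii), the following minimal NumPy sketch (our own illustration, not the paper's implementation; the name sampled_matmul and the keep_ratio parameter are hypothetical) approximates a layer's pre-activations W @ x by importance-sampling a subset of previous-layer nodes and rescaling so the estimate stays unbiased:

    import numpy as np

    def sampled_matmul(W, x, keep_ratio=0.25, rng=None):
        """Approximate W @ x by sampling k columns of W (nodes of the
        previous layer) with probability proportional to |x_j| * ||W[:, j]||,
        the standard importance-sampling scheme for approximate matrix
        products, and rescaling each sampled term by 1 / (k * p_j)."""
        rng = np.random.default_rng() if rng is None else rng
        n = x.shape[0]
        k = max(1, int(keep_ratio * n))
        scores = np.abs(x) * np.linalg.norm(W, axis=0)
        if scores.sum() == 0.0:  # degenerate input: the exact product is zero
            return np.zeros(W.shape[0])
        p = scores / scores.sum()
        idx = rng.choice(n, size=k, replace=True, p=p)
        # Unbiased estimator of the full product using only k sampled edges.
        return W[:, idx] @ (x[idx] / (k * p[idx]))

    # Usage: relative error against the exact product.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((64, 1024))
    x = rng.standard_normal(1024)
    print(np.linalg.norm(W @ x - sampled_matmul(W, x, rng=rng))
          / np.linalg.norm(W @ x))

Technique (i) differs mainly in where the sampling is applied: it keeps only a subset of the current layer's nodes active (e.g., selected by hashing) and computes only the corresponding rows of the product.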
