Exploring Low Rank Training of Deep Neural Networks

Training deep neural networks in low rank, i.e. with factorised layers, is of particular interest to the community: it offers efficiency over unfactorised training in terms of both memory consumption and training time. Prior work has focused on low rank approximations of pre-trained networks and on training in low rank space with additional objectives, offering various ad hoc explanations for the chosen practice. We analyse techniques that work well in practice, and through extensive ablations on models such as GPT2 we provide evidence falsifying common beliefs in the field, hinting in the process at open research questions that still need answering.
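To make the abstract's central object concrete, the sketch below shows what a factorised ("low rank") layer looks like in practice. It is illustrative only and not the authors' implementation: a dense weight matrix is replaced by the product of two trainable factors of rank r, written here in PyTorch; the class name LowRankLinear and the chosen dimensions are assumptions for the example.

```python
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    """Illustrative factorised layer: a dense map d_in -> d_out is replaced
    by the product of two trainable factors, V (d_in -> r) and U (r -> d_out),
    and trained directly in this factorised form."""

    def __init__(self, d_in: int, d_out: int, rank: int, bias: bool = True):
        super().__init__()
        # V projects the input down into the rank-r subspace; U projects back up.
        self.V = nn.Linear(d_in, rank, bias=False)
        self.U = nn.Linear(rank, d_out, bias=bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.U(self.V(x))


# Parameter count: r * (d_in + d_out) instead of d_in * d_out.
# For example, 768 -> 3072 with r = 128 stores ~0.49M weights instead of ~2.36M.
layer = LowRankLinear(d_in=768, d_out=3072, rank=128)
x = torch.randn(4, 768)
print(layer(x).shape)  # torch.Size([4, 3072])
```

With r well below min(d_in, d_out), the factorised layer stores and multiplies far fewer weights than its dense counterpart, which is the source of the memory and training-time savings the abstract refers to.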


Related research

01/24/2020 · Low-rank Gradient Approximation For Memory-Efficient On-device Training of Deep Neural Network
09/08/2020 · Low-Rank Training of Deep Neural Networks for Emerging Memory Technology
07/11/2023 · Stack More Layers Differently: High-Rank Training Through Low-Rank Updates
06/16/2021 · Simultaneous Training of Partially Masked Neural Networks
06/13/2022 · Rank Diminishing in Deep Neural Networks
05/30/2019 · On the Effectiveness of Low-rank Approximations for Collaborative Filtering compared to Neural Networks
05/25/2023 · Sharpness-Aware Minimization Leads to Low-Rank Features