Single-Shot Pruning for Offline Reinforcement Learning

12/31/2021
by Samin Yeasar Arnob, et al.

Deep Reinforcement Learning (RL) is a powerful framework for solving complex real-world problems. The large neural networks employed in this framework are traditionally associated with better generalization capabilities, but their increased size entails the drawbacks of longer training times, substantial hardware requirements, and slower inference. One way to tackle this problem is to prune neural networks, leaving only the necessary parameters. State-of-the-art concurrent pruning techniques for imposing sparsity perform demonstrably well in applications where data distributions are fixed. However, they have not yet been substantially explored in the context of RL. We close the gap between RL and single-shot pruning techniques and present a general pruning approach for Offline RL. We leverage a fixed dataset to prune neural networks before the start of RL training. We then run experiments varying the network sparsity level and evaluating the validity of pruning-at-initialization techniques in continuous control tasks. Our results show that with 95% of network weights pruned, Offline-RL algorithms can still retain performance in the majority of our experiments. To the best of our knowledge, no prior work utilizing pruning in RL retained performance at such high levels of sparsity. Moreover, pruning-at-initialization techniques can be easily integrated into any existing Offline-RL algorithm without changing the learning objective.
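The abstract's central mechanism (score the weights of a freshly initialized network on a batch drawn from the fixed offline dataset, prune once, then train as usual) can be illustrated concretely. Below is a minimal PyTorch sketch assuming a SNIP-style |gradient * weight| saliency score; the names `snip_prune`, `loss_fn`, and `batch` are hypothetical stand-ins, since the paper evaluates several pruning-at-initialization techniques rather than prescribing this particular one.

```python
import torch


def snip_prune(network, loss_fn, batch, sparsity=0.95):
    """Single-shot, SNIP-style pruning before Offline-RL training begins.

    Scores every weight by |gradient * weight| on one batch from the fixed
    offline dataset, then keeps only the top (1 - sparsity) fraction.
    Returns per-layer binary masks to reapply during training.
    """
    loss = loss_fn(network, batch)  # e.g., a TD or behavior-cloning loss on the offline batch
    params = [p for p in network.parameters() if p.dim() > 1]  # weight matrices only
    grads = torch.autograd.grad(loss, params)

    # Connection sensitivity |g * w|, pooled across all layers for a global threshold.
    scores = torch.cat([(g * p).abs().flatten() for g, p in zip(grads, params)])
    k = max(1, int((1.0 - sparsity) * scores.numel()))
    threshold = torch.topk(scores, k, largest=True).values.min()

    masks = []
    with torch.no_grad():
        for g, p in zip(grads, params):
            mask = ((g * p).abs() >= threshold).float()
            p.mul_(mask)        # zero out pruned weights once, up front
            masks.append(mask)
    return masks
```

After this single pruning pass, Offline-RL training proceeds with its usual learning objective; the returned masks are simply reapplied to the weights after each optimizer step so that pruned connections stay at zero.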

