Augmented Random Search for Multi-Objective Bayesian Optimization of Neural Networks

05/23/2023
by Mark Deutel, et al.

Deploying Deep Neural Networks (DNNs) on tiny devices is a common trend for processing the increasing amount of sensor data being generated. Multi-objective optimization approaches can be used to compress DNNs by applying network pruning and weight quantization, minimizing the memory footprint (RAM), the number of parameters (ROM), and the number of floating-point operations (FLOPs) while maintaining predictive accuracy. In this paper, we show that existing multi-objective Bayesian optimization (MOBOpt) approaches can fall short of finding optimal candidates on the Pareto front, and we propose a novel solver based on an ensemble of competing parametric policies trained using an Augmented Random Search (ARS) Reinforcement Learning (RL) agent. Our methodology aims to find feasible trade-offs between a DNN's predictive accuracy, its memory consumption on a given target system, and its computational complexity. Our experiments show that we consistently outperform existing MOBOpt approaches on different data sets and on architectures such as ResNet-18 and MobileNetV3.
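To give a flavor of the search procedure named in the abstract, the following is a minimal sketch of a basic Augmented Random Search update (in the style of Mania et al., 2018) applied to a toy single-objective black-box reward. It is not the paper's multi-objective ensemble of competing policies; the reward function, hyperparameters, and helper names here are illustrative assumptions.

import numpy as np

# Minimal sketch of one Augmented Random Search (ARS) update on a toy
# black-box objective. Illustrative only: the paper trains an ensemble of
# competing parametric policies for multi-objective DNN compression, and
# this stand-in reward replaces the evaluation of a compression candidate.

rng = np.random.default_rng(0)

def reward(theta: np.ndarray) -> float:
    """Stand-in black-box reward (hypothetical); in the paper's setting
    this would involve scoring a compressed DNN candidate."""
    target = np.ones_like(theta)
    return -float(np.sum((theta - target) ** 2))

def ars_step(theta, n_dirs=8, top_b=4, step=0.02, noise=0.05):
    """One ARS-style update: sample random directions, evaluate the reward
    at +/- perturbations, and step along the best directions, scaled by
    the standard deviation of the collected rewards."""
    deltas = rng.standard_normal((n_dirs, theta.size))
    r_pos = np.array([reward(theta + noise * d) for d in deltas])
    r_neg = np.array([reward(theta - noise * d) for d in deltas])
    # Rank directions by the better of their two rewards; keep the top_b.
    order = np.argsort(np.maximum(r_pos, r_neg))[::-1][:top_b]
    sigma = np.concatenate([r_pos[order], r_neg[order]]).std() + 1e-8
    grad = ((r_pos[order] - r_neg[order])[:, None] * deltas[order]).sum(0)
    return theta + step / (top_b * sigma) * grad

theta = np.zeros(4)
for _ in range(200):
    theta = ars_step(theta)
print(reward(theta))  # approaches 0 as theta approaches the target

In the paper's setting, each reward evaluation would plausibly correspond to scoring a candidate on accuracy, RAM, ROM, and FLOPs, with several such policies competing to cover the Pareto front.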
