SemifreddoNets: Partially Frozen Neural Networks for Efficient Computer Vision Systems

06/12/2020
by Leo F. Isikdogan, et al.

We propose a system composed of fixed-topology neural networks with partially frozen weights, named SemifreddoNets. SemifreddoNets operate as fully pipelined hardware blocks that are optimized for efficient hardware implementation. These blocks freeze a portion of the parameters at every layer and replace the corresponding multipliers with fixed scalers. Fixing the weights reduces the silicon area, logic delay, and memory requirements, leading to significant savings in cost and power consumption. Unlike traditional layer-wise freezing approaches, SemifreddoNets strike a favorable trade-off between cost and flexibility by keeping some of the weights configurable at different scales and levels of abstraction in the model. Although fixing the topology and some of the weights limits the flexibility, we argue that the efficiency benefits of this strategy outweigh the advantages of a fully configurable model for many use cases. Furthermore, our system uses repeatable blocks, so model complexity can be adjusted without requiring any hardware change. The hardware implementation of SemifreddoNets provides up to an order of magnitude reduction in silicon area and power consumption compared to an equivalent implementation on a general-purpose accelerator.
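As an illustration of the partial-freezing idea, here is a minimal PyTorch-style sketch. The class name PartiallyFrozenConv, the frozen_ratio parameter, and the per-channel split are hypothetical choices made for this example; the paper describes fixed-function hardware blocks, not this software analogue.

import torch
import torch.nn as nn

class PartiallyFrozenConv(nn.Module):
    # Convolution whose output filters are split into a frozen (non-trainable)
    # branch and a trainable branch, loosely mirroring the idea of fixing a
    # portion of the weights at every layer. Illustrative sketch only;
    # frozen_ratio is a hypothetical parameter, not from the paper.
    def __init__(self, in_ch, out_ch, kernel_size=3, frozen_ratio=0.5):
        super().__init__()
        n_frozen = int(out_ch * frozen_ratio)
        # Frozen branch: weights stay at their initial values and are never updated,
        # analogous to multipliers replaced by fixed scalers in hardware.
        self.frozen = nn.Conv2d(in_ch, n_frozen, kernel_size, padding=kernel_size // 2)
        for p in self.frozen.parameters():
            p.requires_grad = False
        # Trainable branch: these weights remain configurable.
        self.trainable = nn.Conv2d(in_ch, out_ch - n_frozen, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Concatenate frozen and trainable feature maps along the channel axis.
        return torch.cat([self.frozen(x), self.trainable(x)], dim=1)

# Usage: half of the 32 output filters are fixed, the other half stay trainable.
layer = PartiallyFrozenConv(in_ch=3, out_ch=32, frozen_ratio=0.5)
y = layer(torch.randn(1, 3, 64, 64))
print(y.shape)  # torch.Size([1, 32, 64, 64])

In the hardware blocks described above, the frozen weights would be baked into fixed scalers rather than stored and multiplied at run time; this sketch only mirrors the trainable versus frozen split at the software level.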


