Haar Wavelet Feature Compression for Quantized Graph Convolutional Networks

10/10/2021
by Moshe Eliasof, et al.

Graph Convolutional Networks (GCNs) are widely used in a variety of applications and can be seen as an unstructured version of standard Convolutional Neural Networks (CNNs). As in CNNs, the computational cost of GCNs on large input graphs (such as large point clouds or meshes) can be high and inhibit the use of these networks, especially in environments with low computational resources. To ease these costs, quantization can be applied to GCNs. However, aggressive quantization of the feature maps can lead to significant performance degradation. Meanwhile, the Haar wavelet transform is known to be one of the most effective and efficient approaches to signal compression. Therefore, instead of applying aggressive quantization to feature maps, we propose to utilize Haar wavelet compression together with light quantization to reduce the computation and bandwidth involved in the network. We demonstrate that this approach surpasses aggressive feature quantization by a significant margin on a variety of problems, ranging from node classification to point cloud classification and part and semantic segmentation.
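To make the core idea concrete, the sketch below shows a single-level orthonormal Haar transform applied along the channel dimension of a feature vector, followed by light uniform quantization of the wavelet coefficients. This is a minimal illustration of the general technique, not the paper's implementation; the function names and the 8-bit setting are our own assumptions, and NumPy is used for brevity.

```python
import numpy as np

def haar_1d(x):
    """Single-level orthonormal Haar transform of a 1-D signal.

    Splits x (even length) into low-pass averages and high-pass details;
    the details of smooth feature maps are near zero and compress well.
    """
    avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    det = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return avg, det

def inverse_haar_1d(avg, det):
    """Exact inverse of haar_1d (perfect reconstruction)."""
    x = np.empty(avg.size * 2, dtype=avg.dtype)
    x[0::2] = (avg + det) / np.sqrt(2.0)
    x[1::2] = (avg - det) / np.sqrt(2.0)
    return x

def quantize(c, bits=8):
    """Light uniform symmetric quantization of wavelet coefficients."""
    max_abs = float(np.abs(c).max())
    if max_abs == 0.0:
        return c.copy()  # all-zero band: nothing to quantize
    scale = max_abs / (2 ** (bits - 1) - 1)
    return np.round(c / scale) * scale
```

In this toy setting, quantizing the wavelet coefficients (rather than the raw features) concentrates the signal energy in the low-pass band, so a light bit-width introduces only a small reconstruction error.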

