Ordinal Pooling

by Adrien Deliège et al.

In the framework of convolutional neural networks, downsampling is often performed with average-pooling, which treats all activations equally, or with max-pooling, which retains only the element with the maximum activation and discards the others. Both operations are restrictive and have previously been shown to be sub-optimal. To address this issue, this work introduces a novel pooling scheme, named ordinal pooling. Ordinal pooling rearranges all the elements of a pooling region into a sequence and assigns a different weight to each element based on its rank in the sequence. These weights are used to compute the pooling operation as a weighted sum of the rearranged elements of the pooling region. They are learned via standard gradient-based training, allowing the network to learn, in a differentiable manner, a behavior anywhere in the spectrum between average-pooling and max-pooling. Our experiments suggest that it is advantageous for networks to perform different types of pooling operations within a pooling layer and that a hybrid behavior between average- and max-pooling is often beneficial. More importantly, they also demonstrate that ordinal pooling leads to consistent improvements in accuracy over average- or max-pooling while speeding up training and alleviating the need to choose which pooling operations and activation functions to use in the network. In particular, ordinal pooling mainly helps in lightweight or quantized deep learning architectures, such as those typically considered for embedded applications.
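The core operation described above (sort each pooling region, then take a weighted sum with rank-dependent weights) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function name `ordinal_pool` and the fixed non-overlapping window are assumptions, and in a real network the weight vector would be a trainable parameter updated by backpropagation rather than a constant.

```python
import numpy as np

def ordinal_pool(x, weights, k=2):
    """Ordinal pooling on a 2D feature map x with a non-overlapping k x k window.

    Each k x k region is flattened, sorted in descending order, and reduced to a
    weighted sum. `weights` (length k*k) is shared across regions; in a network
    it would be learned via gradient-based training.
    """
    h, w = x.shape
    out = np.empty((h // k, w // k))
    for i in range(h // k):
        for j in range(w // k):
            region = x[i*k:(i+1)*k, j*k:(j+1)*k].ravel()
            region = np.sort(region)[::-1]  # rank activations from largest to smallest
            out[i, j] = region @ weights    # weighted sum over the ordered elements
    return out

x = np.arange(1.0, 17.0).reshape(4, 4)

# Uniform weights recover average-pooling ...
avg = ordinal_pool(x, np.full(4, 0.25))
# ... while a one-hot weight on the first (largest) element recovers max-pooling.
mx = ordinal_pool(x, np.array([1.0, 0.0, 0.0, 0.0]))
```

Because both endpoints are special cases of the same weight vector, gradient descent can settle anywhere between them, which is the hybrid behavior the abstract refers to.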




