Deep Learning by Scattering

06/24/2013
by Stéphane Mallat, et al.

We introduce general scattering transforms as mathematical models of deep neural networks with l^2 pooling. Scattering networks iteratively apply complex-valued unitary operators, and pooling is performed by a complex modulus. An expected scattering defines a contractive representation of a high-dimensional probability distribution, which preserves its mean-square norm. We show that unsupervised learning can be cast as an optimization of the space contraction, preserving the volume occupied by unlabeled examples at each layer of the network. Supervised learning and classification are performed with an averaged scattering, which provides scattering estimations for multiple classes.
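The abstract describes the core scattering operation: each layer convolves its input with complex-valued filters and pools by taking the complex modulus, and averaging the resulting paths yields the scattering coefficients. The following is a minimal illustrative sketch of that idea in NumPy for 1-D signals; the Morlet-like filter parameters and the two-layer cascade are assumptions for illustration, not the paper's construction.

```python
import numpy as np

def morlet_filter(n, xi, sigma):
    """Complex Morlet-like band-pass filter of length n (illustrative choice)."""
    t = np.arange(n) - n // 2
    g = np.exp(-t**2 / (2 * sigma**2))
    psi = g * np.exp(1j * xi * t)
    psi -= g * psi.sum() / g.sum()      # subtract low-pass leakage so psi has zero mean
    return psi / np.linalg.norm(psi)

def scattering_layer(x, filters):
    """One scattering layer: convolve with complex filters, pool by modulus."""
    return [np.abs(np.convolve(x, psi, mode="same")) for psi in filters]

# Two-layer scattering of a toy 1-D signal
rng = np.random.default_rng(0)
x = rng.standard_normal(256)
filters = [morlet_filter(64, xi, 8.0) for xi in (0.4, 0.8, 1.6)]

layer1 = scattering_layer(x, filters)                              # 3 first-order paths
layer2 = [u for u1 in layer1 for u in scattering_layer(u1, filters)]  # 9 second-order paths

# Averaging each path (a crude low-pass step) gives scattering coefficients
S = np.array([u.mean() for u in layer1 + layer2])
```

The modulus makes each path non-negative, and because the filters are normalized the cascade is non-expansive, which is the contraction property the abstract refers to.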


