Saturating Auto-Encoders

01/16/2013
by Rostislav Goroshin et al.

We introduce a simple new regularizer for auto-encoders whose hidden-unit activation functions contain at least one zero-gradient (saturated) region. The regularizer explicitly encourages activations to lie in the saturated region(s) of the corresponding activation function; we call the resulting models Saturating Auto-Encoders (SATAEs). We show that the saturation regularizer explicitly limits a SATAE's ability to reconstruct inputs that are not near the data manifold. Furthermore, we show that a wide variety of features can be learned by varying the activation function. Finally, we establish connections with Contractive and Sparse Auto-Encoders.
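
To make the abstract's penalty concrete, here is a minimal NumPy sketch of what such an objective can look like. This is our illustration, not code from the paper: the shapes, the weight `alpha`, and the helper names are assumptions. The penalty f_c(z) is the distance from each pre-activation to the nearest saturated (zero-gradient) region of the activation function, so it is zero exactly when a unit saturates.

```python
import numpy as np

# Distance from pre-activation z to the nearest saturated region.
# For ReLU the zero-gradient region is z <= 0, so the distance is max(0, z).
def fc_relu(z):
    return np.maximum(0.0, z)

# For a saturated-linear ("hard tanh") unit the flat regions are |z| >= 1,
# so the distance to the nearest one is max(0, 1 - |z|).
def fc_hard_tanh(z):
    return np.maximum(0.0, 1.0 - np.abs(z))

def satae_loss(x, W_enc, b_enc, W_dec, b_dec, alpha):
    """Reconstruction error plus the saturation penalty (hard-tanh pairing)."""
    z = W_enc @ x + b_enc                 # encoder pre-activations
    h = np.clip(z, -1.0, 1.0)             # saturated-linear activation
    x_hat = W_dec @ h + b_dec             # linear decoder
    recon = np.sum((x - x_hat) ** 2)      # squared reconstruction error
    penalty = np.sum(fc_hard_tanh(z))     # encourages |z| >= 1 (saturation)
    return recon + alpha * penalty

# Tiny usage example with random weights (illustrative only).
rng = np.random.default_rng(0)
x = rng.standard_normal(8)
W_enc = 0.1 * rng.standard_normal((4, 8))
W_dec = 0.1 * rng.standard_normal((8, 4))
loss = satae_loss(x, W_enc, np.zeros(4), W_dec, np.zeros(8), alpha=0.5)
```

With this pairing, driving the penalty toward zero pushes every pre-activation into a flat region of the activation, which is the mechanism the abstract credits for limiting reconstruction of inputs far from the data manifold.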

Related research

09/14/2023
Improved Auto-Encoding using Deterministic Projected Belief Networks
In this paper, we exploit the unique properties of a deterministic proje...

03/04/2023
Lon-eå at SemEval-2023 Task 11: A Comparison of Activation Functions for Soft and Hard Label Prediction
We study the influence of different activation functions in the output l...

12/20/2014
Scoring and Classifying with Gated Auto-encoders
Auto-encoders are perhaps the best-known non-probabilistic methods for r...

04/21/2011
Learning invariant features through local space contraction
We present in this paper a novel approach for training deterministic aut...

06/28/2023
Empirical Loss Landscape Analysis of Neural Network Activation Functions
Activation functions play a significant role in neural network design by...

04/25/2022
Trainable Compound Activation Functions for Machine Learning
Activation functions (AF) are necessary components of neural networks th...

09/10/2020
Auto-encoders for Track Reconstruction in Drift Chambers for CLAS12
In this article we describe the development of machine learning models t...
