Representation Learning for Sequence Data with Deep Autoencoding Predictive Components

10/07/2020
by Junwen Bai, et al.

We propose Deep Autoencoding Predictive Components (DAPC) – a self-supervised representation learning method for sequence data, based on the intuition that useful representations of sequence data should exhibit a simple structure in the latent space. We encourage this latent structure by maximizing an estimate of predictive information of latent feature sequences, which is the mutual information between past and future windows at each time step. In contrast to the mutual information lower bound commonly used by contrastive learning, the estimate of predictive information we adopt is exact under a Gaussian assumption. Additionally, it can be computed without negative sampling. To reduce the degeneracy of the latent space extracted by powerful encoders and keep useful information from the inputs, we regularize predictive information learning with a challenging masked reconstruction loss. We demonstrate that our method recovers the latent space of noisy dynamical systems, extracts predictive features for forecasting tasks, and improves automatic speech recognition when used to pretrain the encoder on large amounts of unlabeled data.
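The predictive-information objective described above has a closed form under the Gaussian assumption: for stacked past and future windows of the latent sequence, I(past; future) = ½(log|Σ_past| + log|Σ_future| − log|Σ_joint|), estimated from sample covariances without negative sampling. A minimal NumPy sketch of that estimator follows; the function name and windowing details are illustrative, not the authors' implementation:

```python
import numpy as np

def gaussian_predictive_info(z, T):
    """Gaussian estimate of predictive information for a latent sequence.

    z : (N, d) array, a sequence of d-dimensional latent features
    T : window size; past and future windows each span T steps
    Returns I(past; future) in nats under a Gaussian assumption.
    """
    N, d = z.shape
    # Stack each length-2T window (T past steps + T future steps) into one row
    windows = np.stack([z[t:t + 2 * T].reshape(-1) for t in range(N - 2 * T + 1)])
    cov = np.cov(windows, rowvar=False)          # joint covariance, (2Td, 2Td)
    cov_past = cov[:T * d, :T * d]               # covariance of the past window
    cov_future = cov[T * d:, T * d:]             # covariance of the future window
    # I = 1/2 (log|Sigma_past| + log|Sigma_future| - log|Sigma_joint|)
    _, logdet_joint = np.linalg.slogdet(cov)
    _, logdet_past = np.linalg.slogdet(cov_past)
    _, logdet_future = np.linalg.slogdet(cov_future)
    return 0.5 * (logdet_past + logdet_future - logdet_joint)
```

For a strongly autocorrelated sequence (e.g. an AR(1) process) this estimate is large and positive, while for i.i.d. noise it is close to zero; maximizing it with respect to an encoder's parameters encourages latent features whose past is predictive of their future.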


