On Finite-Sample Identifiability of Contrastive Learning-Based Nonlinear Independent Component Analysis

06/14/2022
by   Qi Lyu, et al.

Nonlinear independent component analysis (nICA) aims at recovering statistically independent latent components that are mixed by unknown nonlinear functions. Central to nICA is the identifiability of the latent components, which had been elusive until very recently. Specifically, Hyvärinen et al. have shown that the nonlinearly mixed latent components are identifiable (up to often inconsequential ambiguities) under a generalized contrastive learning (GCL) formulation, given that the latent components are independent conditioned on a certain auxiliary variable. The GCL-based identifiability of nICA is elegant, and it establishes interesting connections between nICA and popular unsupervised/self-supervised learning paradigms in representation learning, causal learning, and factor disentanglement. However, existing identifiability analyses of nICA all build upon an unlimited-sample assumption and the use of ideal universal function learners, which creates a non-negligible gap between theory and practice. Closing this gap is a nontrivial challenge, as there is no established "textbook" routine for finite-sample analysis of such unsupervised problems. This work puts forth a finite-sample identifiability analysis of GCL-based nICA. Our analytical framework judiciously combines the properties of the GCL loss function, statistical generalization analysis, and numerical differentiation. The framework also takes the learned function's approximation error into consideration, and reveals an intuitive trade-off between the complexity and expressiveness of the employed function learner. Numerical experiments are used to validate the theorems.
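For readers unfamiliar with the GCL formulation referenced above, the sketch below illustrates its core discrimination step: a feature extractor and a discriminator are trained jointly to tell matched (observation, auxiliary-variable) pairs from pairs in which the auxiliary variable has been shuffled. This is a minimal illustration of the idea, not the paper's implementation; the network sizes, optimizer settings, synthetic data, and names such as feature_net, discriminator, and gcl_step are assumptions made for the example.

```python
# Minimal sketch of GCL-style contrastive training for nICA (illustrative only).
import torch
import torch.nn as nn

d, aux_dim, n = 3, 1, 10_000          # latent dimension, auxiliary-variable dimension, sample size

feature_net = nn.Sequential(           # h(x): candidate de-mixing / feature extractor
    nn.Linear(d, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, d),
)
discriminator = nn.Sequential(         # maps [h(x), u] to a real-valued logit
    nn.Linear(d + aux_dim, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(
    list(feature_net.parameters()) + list(discriminator.parameters()), lr=1e-3
)
bce = nn.BCEWithLogitsLoss()

def gcl_step(x, u):
    """One GCL update: discriminate true pairs (x, u) from shuffled pairs (x, u~)."""
    u_shuffled = u[torch.randperm(u.shape[0])]          # break the x-u dependence
    h = feature_net(x)
    logits_pos = discriminator(torch.cat([h, u], dim=1))
    logits_neg = discriminator(torch.cat([h, u_shuffled], dim=1))
    loss = bce(logits_pos, torch.ones_like(logits_pos)) + \
           bce(logits_neg, torch.zeros_like(logits_neg))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with synthetic data (purely illustrative):
u = torch.rand(n, aux_dim)                 # auxiliary variable, e.g. a segment index
s = torch.randn(n, d) * (0.5 + u)          # sources, conditionally independent given u
x = torch.tanh(s @ torch.randn(d, d))      # unknown nonlinear mixture
for epoch in range(200):
    loss = gcl_step(x, u)
```

Under the conditional-independence assumption on the sources, the identifiability theory referenced above says that h(x) learned this way recovers the latent components up to component-wise ambiguities; the finite-sample analysis in this work quantifies how far one can get with limited data and a non-ideal function learner.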


