Subspace Embeddings Under Nonlinear Transformations

10/05/2020
by Aarshvi Gajjar, et al.

We consider low-distortion embeddings for subspaces under entrywise nonlinear transformations. In particular, we seek embeddings that preserve the norm of all vectors in a space S = {y : y = f(x) for x ∈ Z}, where Z is a k-dimensional subspace of ℝ^n and f(x) is a nonlinear activation function applied entrywise to x. When f is the identity, and so S is just a k-dimensional subspace, it is known that, with high probability, a random embedding into O(k/ϵ^2) dimensions preserves the norm of all y ∈ S up to (1 ± ϵ) relative error. Such embeddings are known as subspace embeddings, and have found widespread use in compressed sensing and approximation algorithms. We give the first low-distortion embeddings for a wide class of nonlinear functions f. In particular, we give embeddings with additive ϵ error into O(k log(n/ϵ)/ϵ^2) dimensions for a class of nonlinearities that includes the popular Sigmoid, SoftPlus, and Gaussian functions. We strengthen this result to give relative-error embeddings under some further restrictions, which are satisfied, e.g., by the Tanh, SoftSign, Exponential Linear Unit, and many other 'soft' step functions and rectifying units. Understanding embeddings for subspaces under nonlinear transformations is a key step towards extending random sketching and compressed sensing techniques from linear problems to nonlinear ones. We discuss example applications of our results to improved bounds for compressed sensing via generative neural networks.
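The classical guarantee in the linear case (f the identity) is easy to spot-check numerically. Below is a minimal sketch, assuming a plain Gaussian sketching matrix as the random embedding and illustrative choices of n, k, and ϵ (none of these values come from the paper); it measures the distortion on a finite sample of subspace vectors, and repeats the check after applying Tanh entrywise. A finite-sample check like this does not establish the uniform guarantee over all of S that the paper proves.

```python
# Illustrative sketch only: a Gaussian embedding applied to vectors
# y = f(x) with x drawn from a random k-dimensional subspace of R^n.
# All dimensions and sample sizes are arbitrary choices for the demo.
import numpy as np

rng = np.random.default_rng(0)

n, k, eps = 1000, 10, 0.25
m = int(np.ceil(k / eps**2))  # classical O(k / eps^2) target dimension

# Orthonormal basis for a random k-dimensional subspace Z of R^n.
Z, _ = np.linalg.qr(rng.standard_normal((n, k)))

# Gaussian sketching matrix, scaled so E[||S y||^2] = ||y||^2.
S = rng.standard_normal((m, n)) / np.sqrt(m)

def worst_distortion(f, trials=200):
    """Largest relative norm distortion of S over sampled y = f(Z g)."""
    worst = 0.0
    for _ in range(trials):
        y = f(Z @ rng.standard_normal(k))
        worst = max(worst, abs(np.linalg.norm(S @ y) / np.linalg.norm(y) - 1))
    return worst

# Linear case (f = identity): distortion should stay within ~eps.
print(f"f = identity: worst distortion {worst_distortion(lambda x: x):.3f}")

# Nonlinear case (f = tanh entrywise): same check on the image of Z.
print(f"f = tanh:     worst distortion {worst_distortion(np.tanh):.3f}")
```

With m ≈ k/ϵ^2 rows, both printed distortions will typically come out below ϵ on a sample of this size; the substance of the paper is making the nonlinear guarantee hold simultaneously for every vector in S, not just for a sampled few.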

