Uniform Approximations for Randomized Hadamard Transforms with Applications

03/03/2022
by Yeshwanth Cherapanamjeri, et al.

Randomized Hadamard Transforms (RHTs) have emerged as a computationally efficient alternative to the use of dense unstructured random matrices across a range of domains in computer science and machine learning. For several applications such as dimensionality reduction and compressed sensing, the theoretical guarantees for methods based on RHTs are comparable to approaches using dense random matrices with i.i.d. entries. However, several such applications are in the low-dimensional regime where the number of rows sampled from the matrix is rather small. Prior arguments are not applicable to the high-dimensional regime often found in machine learning applications like kernel approximation. Given an ensemble of RHTs with Gaussian diagonals, {M^i}_{i=1}^m, and any 1-Lipschitz function, f: ℝ → ℝ, we prove that the average of f over the entries of {M^i v}_{i=1}^m converges to its expectation uniformly over ‖v‖ ≤ 1 at a rate comparable to that obtained from using truly Gaussian matrices. We then use our inequality to derive improved guarantees for two applications in the high-dimensional regime: 1) kernel approximation and 2) distance estimation. For kernel approximation, we prove the first uniform approximation guarantees for random features constructed through RHTs, lending theoretical justification to their empirical success, while for distance estimation, our convergence result implies data structures with improved runtime guarantees over previous work by the authors. We believe our general inequality is likely to find use in other applications.
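As a quick numerical illustration of the quantity the theorem controls, the sketch below builds an ensemble of RHTs with Gaussian diagonals, averages a 1-Lipschitz function f over the entries of {M^i v}, and compares against a dense Gaussian baseline. This is a minimal sketch assuming NumPy/SciPy; the dimensions d and m and the choice f = |·| are illustrative rather than taken from the paper, and the paper's exact normalization of M^i may differ (here the unnormalized ±1 Hadamard matrix is used so that each entry of M^i v is standard normal for a unit vector v).

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)

d = 256     # ambient dimension (power of two, so a Hadamard matrix exists)
m = 64      # number of independent RHTs in the ensemble
f = np.abs  # an example 1-Lipschitz function f: R -> R

# Unnormalized Hadamard matrix with +/-1 entries. With a Gaussian diagonal
# D^i, each entry of M^i v = H D^i v is N(0, ||v||^2), matching the marginal
# distribution of an entry of G v for a dense i.i.d. Gaussian matrix G.
H = hadamard(d)

# Gaussian diagonals defining the ensemble {M^i = H D^i}_{i=1}^m.
diags = rng.standard_normal((m, d))

# A test vector with ||v|| <= 1.
v = rng.standard_normal(d)
v /= np.linalg.norm(v)

# Average of f over all m*d entries of {M^i v}_{i=1}^m.
rht_avg = np.mean([f(H @ (g * v)) for g in diags])

# Comparison: a truly Gaussian matrix with the same total number of rows.
G = rng.standard_normal((m * d, d))
gauss_avg = np.mean(f(G @ v))

# Both averages should concentrate around E[f(g)] for g ~ N(0, 1);
# for f = |.| that expectation is sqrt(2/pi) ~ 0.798.
print(f"RHT ensemble average: {rht_avg:.4f}")
print(f"Gaussian average:     {gauss_avg:.4f}")
print(f"E|g|, g ~ N(0,1):     {np.sqrt(2 / np.pi):.4f}")
```

Note that applying H via the fast Walsh–Hadamard transform costs O(d log d) per vector versus O(d^2) for a dense matrix-vector product, which is the computational advantage motivating RHTs; the dense multiply above is used only for brevity.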


Related research:

- The Unreasonable Effectiveness of Structured Random Orthogonal Embeddings (03/02/2017)
- Spectrum of inner-product kernel matrices in the polynomial regime and multiple descent phenomenon in kernel ridge regression (04/21/2022)
- Constant-Expansion Suffices for Compressed Sensing with Generative Priors (06/07/2020)
- Sampling-based Nyström Approximation and Kernel Quadrature (01/23/2023)
- The Average-Case Time Complexity of Certifying the Restricted Isometry Property (05/22/2020)
- Precise expressions for random projections: Low-rank approximation and randomized Newton (06/18/2020)
- Optimal Rates for Random Fourier Features (06/06/2015)
