Universal Matrix Sparsifiers and Fast Deterministic Algorithms for Linear Algebra

05/10/2023 ∙ by Rajarshi Bhattacharjee, et al.

Given ๐€โˆˆโ„^n ร— n with entries bounded in magnitude by 1, it is well-known that if S โŠ‚ [n] ร— [n] is a uniformly random subset of ร• (n/ฯต^2) entries, and if ๐€_S equals ๐€ on the entries in S and is zero elsewhere, then ๐€ - n^2/sยท๐€_S_2 โ‰คฯต n with high probability, where ยท_2 is the spectral norm. We show that for positive semidefinite (PSD) matrices, no randomness is needed at all in this statement. Namely, there exists a fixed subset S of ร• (n/ฯต^2) entries that acts as a universal sparsifier: the above error bound holds simultaneously for every bounded entry PSD matrix ๐€โˆˆโ„^n ร— n. One can view this result as a significant extension of a Ramanujan expander graph, which sparsifies any bounded entry PSD matrix, not just the all ones matrix. We leverage the existence of such universal sparsifiers to give the first deterministic algorithms for several central problems related to singular value computation that run in faster than matrix multiplication time. We also prove universal sparsification bounds for non-PSD matrices, showing that ร• (n/ฯต^4) entries suffices to achieve error ฯตยทmax(n,๐€_1), where ๐€_1 is the trace norm. We prove that this is optimal up to an ร• (1/ฯต^2) factor. Finally, we give an improved deterministic spectral approximation algorithm for PSD ๐€ with entries lying in {-1,0,1}, which we show is nearly information-theoretically optimal.


