Is Input Sparsity Time Possible for Kernel Low-Rank Approximation?

11/05/2017
by Cameron Musco, et al.

Low-rank approximation is a common tool used to accelerate kernel methods: the n × n kernel matrix K is approximated by a rank-k matrix K̃, which can be stored in far less space and processed much more quickly. In this work we study the limits of computationally efficient low-rank kernel approximation. We show that for a broad class of kernels, including the popular Gaussian and polynomial kernels, computing a relative-error rank-k approximation to K is at least as difficult as multiplying the input data matrix A ∈ R^{n × d} by an arbitrary matrix C ∈ R^{d × k}. Barring a breakthrough in fast matrix multiplication, when k is not too large this requires Ω(nnz(A)·k) time, where nnz(A) is the number of non-zeros in A. This lower bound matches, in many parameter regimes, recent work on subquadratic-time algorithms for low-rank approximation of general kernels [MM16, MW17], demonstrating that these algorithms are unlikely to be significantly improved, in particular to O(nnz(A)) input-sparsity runtimes. At the same time, there is hope: we show for the first time that O(nnz(A))-time approximation is possible for general radial basis function kernels (e.g., the Gaussian kernel) for the closely related problem of low-rank approximation of the kernelized dataset.
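For intuition, the sketch below shows the kind of compression the abstract refers to: replacing the full n × n Gaussian kernel matrix K with a rank-k factorization that needs only O(nk) storage. This is a minimal illustrative example using plain Nyström landmark sampling, not the algorithm or the lower-bound construction from the paper; the function names, the uniform sampling scheme, and the parameters (sigma, k) are assumptions for illustration only.

```python
# Minimal Nystrom sketch (illustrative, not the paper's method) of a
# rank-k approximation K_tilde to the n x n Gaussian kernel matrix K.
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2))
    sq = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-sq / (2.0 * sigma**2))

def nystrom_low_rank(A, k, sigma=1.0, seed=0):
    """Return factors (C, W_pinv) with K ~ C @ W_pinv @ C.T of rank <= k.

    Stores O(nk) numbers instead of the full n x n kernel matrix.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    idx = rng.choice(n, size=k, replace=False)   # k uniform landmark points
    C = gaussian_kernel(A, A[idx], sigma)        # n x k block of K
    W = C[idx]                                   # k x k landmark-landmark block
    return C, np.linalg.pinv(W)

# Usage: approximate the kernel matrix of 1000 points in R^20 with rank 50.
A = np.random.default_rng(1).standard_normal((1000, 20))
C, W_pinv = nystrom_low_rank(A, k=50)
K_tilde = C @ W_pinv @ C.T                       # rank-<=50 approximation of K
K = gaussian_kernel(A, A)
print(np.linalg.norm(K - K_tilde) / np.linalg.norm(K))
```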


Related research

Structured Block Basis Factorization for Scalable Kernel Matrix Evaluation (05/03/2015)
Kernel matrices are popular in machine learning and scientific computing...

On the Complexity of Learning with Kernels (11/05/2014)
A well-recognized limitation of kernel learning is the requirement to ha...

Learning Representation from Neural Fisher Kernel with Low-rank Approximation (02/04/2022)
In this paper, we study the representation of neural networks from the v...

Kernel approximation on algebraic varieties (06/04/2021)
Low-rank approximation of kernels is a fundamental mathematical problem ...

Multivariate Hawkes Processes for Large-scale Inference (02/26/2016)
In this paper, we present a framework for fitting multivariate Hawkes pr...

Kernel optimization for Low-Rank Multi-Fidelity Algorithms (01/05/2021)
One of the major challenges for low-rank multi-fidelity (MF) approaches ...

Learning from DPPs via Sampling: Beyond HKPV and symmetry (07/08/2020)
Determinantal point processes (DPPs) have become a significant tool for ...
