Tensor Estimation with Nearly Linear Samples

by Christina Lee Yu, et al.

There is a conjectured computational-statistical gap in the number of samples needed to perform tensor estimation. In particular, for a low-rank order-3 tensor with Θ(n) parameters, Barak and Moitra conjectured that Ω(n^(3/2)) samples are needed for polynomial-time computation, based on a reduction from a specific hard instance of a rank-1 tensor to the random 3-XOR distinguishability problem. In this paper, we take a complementary perspective and characterize a subclass of tensor instances that can be estimated with only O(n^(1+κ)) observations for any arbitrarily small constant κ > 0, i.e., with nearly linearly many samples. For the class of tensors with constant orthogonal CP-rank, the "hardness" of an instance can be parameterized by the minimum absolute value of the sums of the latent factor vectors. If the sum of each latent factor vector is bounded away from zero, we present an algorithm that performs tensor estimation with O(n^(1+κ)) samples for an order-t tensor, significantly fewer than the previously achievable bound of O(n^(t/2)) and close to the lower bound of Ω(n). This result suggests that, among constant orthogonal CP-rank tensors, the computationally hard instances form only a small subset of all possible tensors.
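To make the hardness parameter concrete, the following sketch (illustrative only, not the authors' code; all variable names and the rank/size choices are assumptions) builds a symmetric order-3 tensor with a constant orthogonal CP-rank and computes the quantity the abstract describes: the minimum absolute value of the sums of the latent factor vectors.

```python
import numpy as np

# Illustrative sketch: an order-3 tensor with orthogonal CP-rank r,
# T = sum_k lambda_k * (q_k ⊗ q_k ⊗ q_k), where the q_k are orthonormal.
rng = np.random.default_rng(0)
n, r = 50, 3

# Orthonormal latent factors: the columns of Q.
Q, _ = np.linalg.qr(rng.standard_normal((n, r)))
weights = np.array([3.0, 2.0, 1.0])  # CP weights lambda_k (arbitrary here)

# Assemble the tensor from its orthogonal CP decomposition.
T = np.einsum("k,ik,jk,lk->ijl", weights, Q, Q, Q)

# "Hardness" parameter from the abstract: the minimum absolute value of the
# sums of the latent factor vectors. Instances where every factor sum is
# bounded away from zero are the ones the paper's algorithm handles with
# nearly linear samples.
factor_sums = Q.sum(axis=0)  # one sum per latent factor column
hardness = np.min(np.abs(factor_sums))
print(hardness)
```

A random orthonormal factor has entry sums concentrating around magnitude Θ(1) (each sum is roughly a standard Gaussian after normalization), so generic random instances tend to satisfy the boundedness condition; adversarial instances can drive a factor sum to zero.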




Statistical and computational rates in high rank tensor estimation

Higher-order tensor datasets arise commonly in recommendation systems, n...

Finding tensor decompositions with sparse optimization

In this paper, we suggest a new method for a given tensor to find CP dec...

A New Sampling Technique for Tensors

In this paper we propose new techniques to sample arbitrary third-order ...

Recovery Guarantees for Quadratic Tensors with Limited Observations

We consider the tensor completion problem of predicting the missing entr...

Structured Low-Rank Tensors for Generalized Linear Models

Recent works have shown that imposing tensor structures on the coefficie...

Machine learning with tree tensor networks, CP rank constraints, and tensor dropout

Tensor networks approximate order-N tensors with a reduced number of deg...

Tensor denoising with trend filtering

We extend the notion of trend filtering to tensors by considering the k^...
