Online Tensor Learning: Computational and Statistical Trade-offs, Adaptivity and Optimal Regret

06/06/2023
by Jian-Feng Cai, et al.

We investigate a generalized framework for estimating latent low-rank tensors in an online setting, encompassing both linear and generalized linear models; the framework flexibly handles continuous and categorical variables. We study two specific applications: online tensor completion and online binary tensor learning. For these problems we propose an online Riemannian gradient descent algorithm, which converges linearly and recovers the low-rank component under appropriate conditions in all applications. In addition, we establish a precise entry-wise error bound for online tensor completion. Notably, this is the first work to incorporate noise into the online low-rank tensor recovery task. In the presence of noise, we observe a surprising trade-off between computational and statistical performance: increasing the step size accelerates convergence but leads to a larger statistical error, whereas a smaller step size yields a statistically optimal estimator at the expense of slower convergence. We also carry out a regret analysis for online tensor regression. Under a fixed step size, a trilemma among the convergence rate, the statistical error rate, and the regret emerges, and with an optimal choice of step size the algorithm attains the optimal regret of O(√T). We further extend the analysis to the adaptive setting where the horizon T is unknown, and show that employing different step sizes yields a statistically optimal error rate together with a regret of O(log T). Numerical results corroborate our theoretical claims.
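The paper's exact algorithm and assumptions live in the full text; below is a minimal, self-contained sketch of the kind of update the abstract describes for online tensor completion, assuming one noisy entry per round, a stochastic gradient step on that entry's squared loss, and a retraction back onto the low-Tucker-rank set (here via sequentially truncated HOSVD, a standard quasi-optimal choice). All names, the uniform sampling model, and the toy sizes are illustrative assumptions, not the authors' implementation.

```python
# Sketch of online Riemannian gradient descent for noisy online tensor
# completion. Structure assumed from the abstract: per-round gradient step on
# the observed entry's squared loss, then a low-Tucker-rank retraction.
import numpy as np

def mode_product(X, M, mode):
    """Mode-`mode` product: multiply matrix M into tensor X along `mode`."""
    Xm = np.moveaxis(X, mode, 0)
    out = np.tensordot(M, Xm, axes=([1], [0]))
    return np.moveaxis(out, 0, mode)

def hosvd_retract(X, ranks):
    """Retraction onto rank-`ranks` tensors via sequentially truncated HOSVD."""
    Y, factors = X, []
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(Y, mode, 0).reshape(Y.shape[mode], -1)
        U = np.linalg.svd(unfolding, full_matrices=False)[0][:, :r]
        factors.append(U)
        Y = mode_product(Y, U.T, mode)         # shrink this mode to rank r
    for mode, U in enumerate(factors):         # map the small core back up
        Y = mode_product(Y, U, mode)
    return Y

def online_completion(dims, ranks, stream, eta, T):
    """Fixed-step online loop; returns the estimate and cumulative regret."""
    X, regret = np.zeros(dims), 0.0
    for _ in range(T):
        idx, y = next(stream)                  # observe one noisy entry
        resid = X[idx] - y
        regret += 0.5 * resid**2               # per-round prediction loss
        G = np.zeros(dims)
        G[idx] = resid                         # gradient of 0.5*(X[idx] - y)^2
        X = hosvd_retract(X - eta * G, ranks)  # gradient step + retraction
    return X, regret

# Toy data: rank-(2,2,2) ground truth observed uniformly with Gaussian noise.
rng = np.random.default_rng(0)
dims, ranks = (15, 15, 15), (2, 2, 2)
T_star = rng.standard_normal(ranks)
for mode in range(3):
    T_star = mode_product(T_star, rng.standard_normal((dims[mode], ranks[mode])), mode)

def entry_stream(sigma=0.1):
    while True:
        idx = tuple(rng.integers(d) for d in dims)
        yield idx, T_star[idx] + sigma * rng.standard_normal()

for eta in (1.0, 0.1):                         # large step vs. small step
    X_hat, reg = online_completion(dims, ranks, entry_stream(), eta, T=20000)
    err = np.linalg.norm(X_hat - T_star) / np.linalg.norm(T_star)
    print(f"eta={eta:>4}: relative error {err:.3f}, regret {reg:.1f}")
```

Running the sketch with a large versus a small fixed step size is a cheap way to probe the trade-off described above: the larger eta drives the error down in fewer rounds, while the smaller eta descends more slowly toward a lower noise floor.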


Related research

02/26/2020 · An Optimal Statistical and Computational Framework for Generalized Tensor Estimation
This paper describes a flexible framework for generalized low-rank tenso...

03/16/2021 · Generalized Low-rank plus Sparse Tensor Estimation by Fast Riemannian Optimization
We investigate a generalized framework to estimate a latent low-rank plu...

11/14/2017 · Statistically Optimal and Computationally Efficient Low Rank Tensor Completion from Noisy Entries
In this article, we develop methods for estimating a low rank tensor fro...

04/24/2021 · Low-rank Tensor Estimation via Riemannian Gauss-Newton: Statistical Optimality and Second-Order Convergence
In this paper, we consider the estimation of a low Tucker rank tensor fr...

09/06/2023 · Quantile and pseudo-Huber Tensor Decomposition
This paper studies the computational and statistical aspects of quantile...

06/17/2022 · Tensor-on-Tensor Regression: Riemannian Optimization, Over-parameterization, Statistical-computational Gap, and Their Interplay
We study the tensor-on-tensor regression, where the goal is to connect t...

09/11/2019 · Implicit Regularization for Optimal Sparse Recovery
We investigate implicit regularization schemes for gradient descent meth...
