Recovery of Future Data via Convolution Nuclear Norm Minimization
This paper is about recovering unseen future data from a given sequence of historical samples, a problem referred to as future data recovery—a significant problem closely related to time series forecasting. To address it, the prevalent approach is to use deep neural networks, which are built on the hypothesis that the desired evolution law can be learned by feeding many observed samples into an overparameterized network. In practice, however, it is not always feasible to obtain a large number of training samples. To overcome this issue, we suggest a different methodology. Namely, we convert future data recovery into a more inclusive problem called sequential tensor completion (STC), which is to restore a latent tensor of sequential structure from a sampling of its entries. Unlike the ordinary tensor completion problem studied in most of the literature, STC has a distinctive setup that allows the locations of missing entries to be distributed arbitrarily, seamlessly integrating the future values of a time series into the framework of missing data. We then propose two methods to address STC: Discrete Fourier Transform based ℓ_1 minimization (DFT-ℓ_1) and Convolution Nuclear Norm Minimization (CNNM). We provide theoretical results to guarantee the recovery performance of the proposed methods. Remarkably, our theory reveals an important message: under certain conditions, the unseen future values are indeed recoverable from the historical observations. Experiments on univariate time series, images, and videos show encouraging results.
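To make the DFT-ℓ_1 idea concrete for the univariate case, below is a minimal sketch, not the authors' exact solver: it treats the future samples as missing entries and alternates between soft thresholding of the Fourier spectrum (promoting spectral sparsity) and re-imposing the observed history. The function name, the threshold tau, and the iteration count are illustrative assumptions.

import numpy as np

def dft_l1_complete(x_obs, mask, tau=0.05, n_iters=2000):
    """Sketch of DFT-based l1 completion of a 1-D signal.

    x_obs   : signal with arbitrary values at unobserved positions
    mask    : boolean array, True where an entry is observed
    tau     : soft-threshold level (assumed hyperparameter)
    n_iters : number of thresholding/projection passes (assumed)
    """
    x = np.where(mask, x_obs, 0.0)
    for _ in range(n_iters):
        X = np.fft.fft(x)
        # Soft-threshold the complex spectrum: shrink each
        # coefficient's magnitude by tau, zeroing small ones.
        mag = np.abs(X)
        scale = np.maximum(1.0 - tau / np.maximum(mag, 1e-12), 0.0)
        x = np.real(np.fft.ifft(scale * X))
        # Data consistency: keep the observed history exactly.
        x[mask] = x_obs[mask]
    return x

# Usage: recover the last 20 samples of a two-tone signal from
# the first 180 (the "future" entries are the missing ones).
t = np.arange(200)
signal = np.sin(2 * np.pi * 5 * t / 200) + 0.5 * np.sin(2 * np.pi * 13 * t / 200)
mask = t < 180
recovered = dft_l1_complete(signal * mask, mask)
print(np.max(np.abs(recovered[~mask] - signal[~mask])))

The Fourier-domain sketch also hints at the link to CNNM: the (cyclic) convolution matrix of a 1-D signal is circulant, and the singular values of a circulant matrix are the magnitudes of the DFT of the signal, so minimizing the convolution nuclear norm in that setting amounts to minimizing the ℓ_1 norm of the DFT coefficients.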