High Dimensional Linear Regression using Lattice Basis Reduction

03/18/2018
by David Gamarnik, et al.

We consider a high-dimensional linear regression problem where the goal is to efficiently recover an unknown vector β^* from n noisy linear observations Y = Xβ^* + W ∈ R^n, for known X ∈ R^{n × p} and unknown W ∈ R^n. Unlike most of the literature on this model, we make no sparsity assumption on β^*. Instead, we adopt a regularization based on the assumption that the underlying vector β^* has rational entries with the same denominator Q ∈ Z_{>0}. We call this the Q-rationality assumption. We propose a new polynomial-time algorithm for this task based on the seminal Lenstra-Lenstra-Lovász (LLL) lattice basis reduction algorithm. We establish that, under the Q-rationality assumption, our algorithm recovers the vector β^* exactly for a large class of distributions of the i.i.d. entries of X and non-zero noise W. We prove that it succeeds under small noise even when the learner has access to only one observation (n = 1). Furthermore, we prove that in the case of Gaussian white noise for W, with n = o(p / log p) and Q sufficiently large, our algorithm tolerates a nearly optimal information-theoretic level of noise.
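The paper's actual construction and analysis are considerably more delicate, but the core idea of recovering Q-rational coefficients via lattice basis reduction can be illustrated in a few lines. The sketch below is a minimal toy embedding, not the authors' algorithm: it assumes the fpylll LLL bindings as a dependency, draws X with integer entries to sidestep fixed-point scaling, keeps the noise small enough that Q·Y rounds exactly to an integer vector, and uses an illustrative weight M to force the residual coordinates toward zero.

```python
# Toy illustration of recovering Q-rational regression coefficients with LLL.
# NOT the paper's construction: X is drawn with integer entries and the noise
# is tiny, so the target vector is plainly the shortest one in the lattice.
# Assumed dependency: fpylll (Python bindings for the fplll LLL library).
import numpy as np
from fpylll import IntegerMatrix, LLL

rng = np.random.default_rng(0)

n, p, Q = 1, 4, 10                        # a single observation (n = 1)
t_true = rng.integers(-5, 6, size=p)      # numerators: beta^* = t_true / Q
X = rng.integers(-10**6, 10**6, size=(n, p))
W = 1e-4 * rng.standard_normal(n)         # small noise, so |Q*W| < 1/2
Y = X @ (t_true / Q) + W                  # observations Y = X beta^* + W

M = 10**6                                 # weight penalizing X t != Q Y (illustrative)
Yz = [int(round(Q * y)) for y in Y]       # equals X t_true exactly, since |Q*W| < 1/2

# Lattice basis B (rows), shape (p+1) x (p+1+n):
#   [ I_p | 0 | M * X^T ]
#   [  0  | 1 | -M * Yz ]
# The integer combination with coefficients (t_true, 1) is (t_true, 1, 0),
# much shorter than every other lattice vector here, so LLL exposes it.
B = IntegerMatrix(p + 1, p + 1 + n)
for i in range(p):
    B[i, i] = 1
    for j in range(n):
        B[i, p + 1 + j] = M * int(X[j, i])
B[p, p] = 1
for j in range(n):
    B[p, p + 1 + j] = -M * Yz[j]

LLL.reduction(B)                          # reduce the basis in place

# Scan the reduced basis for a row of the form +/- (t, 1, 0).
for i in range(p + 1):
    row = [B[i, j] for j in range(p + 1 + n)]
    if abs(row[p]) == 1:
        t_hat = [row[p] * v for v in row[:p]]   # undo the overall sign
        print("recovered beta^*:", [f"{v}/{Q}" for v in t_hat])
        print("true numerators :", list(t_true))
        break
```

With these choices the vector (t_true, 1, 0) is by a wide margin the shortest nonzero lattice vector, so LLL's approximation guarantee suffices to surface it exactly; handling continuous distributions for X and a near-optimal noise level is precisely the technical contribution of the paper.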

Related research

10/24/2019
Inference in High-Dimensional Linear Regression via Lattice Basis Reduction and Integer Relation Detection
We focus on the high-dimensional linear regression problem, where the al...

05/19/2017
Linear regression without correspondence
This article considers algorithmic and statistical aspects of linear reg...

10/12/2018
An Algebraic-Geometric Approach to Shuffled Linear Regression
Shuffled linear regression is the problem of performing a linear regress...

11/14/2017
Sparse High-Dimensional Linear Regression. Algorithmic Barriers and a Local Search Algorithm
We consider a sparse high dimensional regression model where the goal is...

01/16/2017
High-Dimensional Regression with Binary Coefficients. Estimating Squared Error and a Phase Transition
We consider a sparse linear regression model Y=Xβ^*+W where X has a Gaus...

07/25/2018
A model-free approach to linear least squares regression with exact probabilities
In a regression setting with observation vector y ∈ R^n and given finite...

08/17/2023
Polynomial Bounds for Learning Noisy Optical Physical Unclonable Functions and Connections to Learning With Errors
It is shown that a class of optical physical unclonable functions (PUFs)...
