Active Linear Regression

06/20/2019
by Xavier Fontaine, et al.

We consider the problem of active linear regression, where a decision maker has to choose between several covariates to sample in order to obtain the best estimate β̂ of the parameter β* of the linear model, in the sense of minimizing E‖β̂ − β*‖². Using bandit and convex optimization techniques, we propose an algorithm that defines the sampling strategy of the decision maker, and we compare it with other algorithms. We provide theoretical guarantees for our algorithm in different settings, including an O(T^-2) regret bound in the case where the covariates form a basis of the feature space, generalizing and improving existing results. Numerical experiments validate our theoretical findings.
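To make the setting concrete, here is a minimal sketch (not the paper's algorithm) of the basis case: the covariates are orthonormal, each has its own noise level, and a greedy A-optimal-style rule allocates samples to whichever covariate most reduces the total estimation variance. All names, dimensions, and noise levels below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: K covariates in R^d forming a basis, unknown beta_star.
d, K, T = 3, 3, 3000
X = np.eye(d)                      # covariates = standard basis vectors
beta_star = rng.normal(size=d)
sigma = np.array([1.0, 2.0, 0.5])  # per-covariate noise levels (assumed known here)

counts = np.zeros(K)
sums = np.zeros(K)

for t in range(T):
    if t < K:
        k = t                      # sample each covariate once to initialize
    else:
        # Greedy rule: since the covariates are orthonormal, the estimation
        # error E||beta_hat - beta_star||^2 equals sum_k sigma_k^2 / n_k;
        # pick the covariate whose extra sample reduces it the most.
        gains = sigma**2 / counts - sigma**2 / (counts + 1)
        k = int(np.argmax(gains))
    y = X[k] @ beta_star + sigma[k] * rng.normal()
    counts[k] += 1
    sums[k] += y

beta_hat = sums / counts           # least-squares estimate in the basis case
err = float(np.sum((beta_hat - beta_star) ** 2))
```

This greedy allocation drives each n_k toward being proportional to sigma_k, which is the optimal static allocation in the orthonormal case; noisier covariates are sampled more often.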

Related research

11/02/2021 · Stochastic Online Linear Regression: the Forward Algorithm to Replace Ridge
We consider the problem of online linear regression in the stochastic se...

02/18/2023 · Online Instrumental Variable Regression: Regret Analysis and Bandit Feedback
The independence of noise and covariates is a standard assumption in onl...

03/12/2018 · Scalable Algorithms for Learning High-Dimensional Linear Mixed Models
Linear mixed models (LMMs) are used extensively to model dependencies of ...

02/09/2016 · Online Active Linear Regression via Thresholding
We consider the problem of online active learning to collect data for re...

06/01/2019 · Robust approximate linear regression without correspondence
Estimating regression coefficients using unordered multisets of covariat...

07/05/2023 · D-optimal Subsampling Design for Massive Data Linear Regression
Data reduction is a fundamental challenge of modern technology, where cl...

06/02/2019 · Nonparametric Functional Approximation with Delaunay Triangulation
We propose a differentiable nonparametric algorithm, the Delaunay triang...
