Orthogonal Statistical Learning with Self-Concordant Loss

04/30/2022
by Lang Liu, et al.

Orthogonal statistical learning and double machine learning have emerged as general frameworks for two-stage statistical prediction in the presence of a nuisance component. We establish non-asymptotic bounds on the excess risk of orthogonal statistical learning methods with a loss function satisfying a self-concordance property. Our bounds improve upon existing bounds by a dimension factor while lifting the assumption of strong convexity. We illustrate the results with examples from multiple treatment effect estimation and generalized partially linear modeling.
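The two-stage structure described above can be made concrete in the partially linear model Y = θ T + g(X) + noise: nuisance regressions E[T|X] and E[Y|X] are fit on one fold, and the target parameter θ is then estimated on the held-out fold from an orthogonalized, residual-on-residual criterion. The Python sketch below, which uses squared loss and random-forest nuisance learners, is purely illustrative of this cross-fitted two-stage recipe; it is not the estimator, nor one of the self-concordant losses, analyzed in the paper. (For intuition on the loss assumption: the logistic loss φ(t) = log(1 + e^t) satisfies |φ'''(t)| ≤ φ''(t), a standard generalized self-concordance condition that holds without strong convexity.)

    # Minimal sketch of cross-fitted, two-stage orthogonal estimation in a
    # partially linear model Y = theta * T + g(X) + noise. Illustrative only:
    # the random-forest nuisance learners and the squared loss are assumptions
    # made for this example, not the setting analyzed in the paper.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import KFold

    def cross_fitted_theta(X, T, Y, n_splits=5, seed=0):
        res_T = np.zeros_like(T, dtype=float)
        res_Y = np.zeros_like(Y, dtype=float)
        for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
            # Stage 1: fit the nuisance regressions E[T|X] and E[Y|X] on the training fold.
            m_hat = RandomForestRegressor(random_state=seed).fit(X[train], T[train])
            l_hat = RandomForestRegressor(random_state=seed).fit(X[train], Y[train])
            # Cross-fitting: residualize on the held-out fold only.
            res_T[test] = T[test] - m_hat.predict(X[test])
            res_Y[test] = Y[test] - l_hat.predict(X[test])
        # Stage 2: orthogonal (partialled-out) least-squares estimate of theta.
        return np.dot(res_T, res_Y) / np.dot(res_T, res_T)

    # Example on synthetic data with theta = 2.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    T = X[:, 0] + rng.normal(size=1000)
    Y = 2.0 * T + np.sin(X[:, 1]) + rng.normal(size=1000)
    print(cross_fitted_theta(X, T, Y))  # should be close to 2

Cross-fitting keeps first-stage estimation error out of the data used in the second stage, and the orthogonality of the residual-on-residual criterion makes the resulting bias second order in the nuisance error; the paper's excess-risk bounds quantify this behavior for self-concordant losses without requiring strong convexity.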

Related research

Higher-Order Orthogonal Causal Learning for Treatment Effect (03/22/2021)
Most existing studies on the double/debiased machine learning method con...

Confidence Sets under Generalized Self-Concordance (12/31/2022)
This paper revisits a fundamental problem in statistical inference from ...

Orthogonal Statistical Learning (01/25/2019)
We provide excess risk guarantees for statistical learning in the presen...

Continuity of Generalized Entropy and Statistical Learning (12/31/2020)
We study the continuity property of the generalized entropy as a functio...

Orthogonal Machine Learning: Power and Limitations (11/01/2017)
Double machine learning provides √(n)-consistent estimates of parameters...

Localization, Convexity, and Star Aggregation (05/19/2021)
Offset Rademacher complexities have been shown to imply sharp, data-depe...

Semiparametric Inference for Non-monotone Missing-Not-at-Random Data: the No Self-Censoring Model (09/04/2019)
We study the identification and estimation of statistical functionals of...
