Neumann networks: differential programming for supervised learning with missing values

by Marine Le Morvan, et al.

The presence of missing values makes supervised learning much more challenging. Indeed, previous work has shown that even when the response is a linear function of the complete data, the optimal predictor is a complex function of the observed entries and the missingness indicator. As a result, the computational or sample complexity of consistent approaches depends on the number of missing patterns, which can be exponential in the number of dimensions. In this work, we derive the analytical form of the optimal predictor under a linearity assumption and various missing data mechanisms, including Missing at Random (MAR) and self-masking (Missing Not At Random, MNAR). Based on a Neumann series approximation of the optimal predictor, we propose a new principled architecture, named Neumann networks. Their originality and strength come from the use of a new type of non-linearity: multiplication by the missingness indicator. We provide an upper bound on the Bayes risk of Neumann networks, and show that they have good predictive accuracy with both a number of parameters and a computational complexity independent of the number of missing data patterns. As a result, they scale well to problems with many features and remain statistically efficient for medium-sized samples. Moreover, we show that, contrary to procedures using EM or imputation, they are robust to the missing data mechanism, including difficult MNAR settings such as self-masking.
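The core idea described above, a truncated Neumann series combined with re-masking by the missingness indicator, can be conveyed in a minimal NumPy sketch. This is an illustration under simplifying assumptions, not the authors' exact architecture: the names `neumann_predict`, `W`, `mu`, `beta`, and `depth` are hypothetical, and in the actual network the matrices would be learned by gradient descent rather than fixed.

```python
import numpy as np

def neumann_predict(x, m, W, mu, beta, depth=5):
    """Illustrative truncated Neumann iteration for prediction with
    missing values (a sketch; names and details are assumptions).

    x     : input with missing entries set to 0, shape (d,)
    m     : missingness indicator, 1 = observed, 0 = missing, shape (d,)
    W     : (d, d) weight matrix, playing the role of Id - Sigma
    mu    : (d,) offset vector, playing the role of a mean term
    beta  : (d,) weights of the final linear read-out
    depth : number of Neumann iterations (network depth)
    """
    z0 = m * (x - mu)           # the key non-linearity: multiplication
    z = z0                      # by the missingness indicator m
    for _ in range(depth):
        z = m * (W @ z) + z0    # one Neumann step: apply W, re-mask, add z0
    return float(beta @ z)      # linear read-out on the corrected input
```

Each iteration corresponds to one layer of the network, so the parameter count and per-sample cost depend on the depth and the dimension `d`, never on the number of missing-data patterns.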



