Ignorability in Statistical and Probabilistic Inference

by M. Jaeger, et al.

When dealing with incomplete data in statistical learning, or incomplete observations in probabilistic inference, one needs to distinguish the fact that a certain event is observed from the fact that the observed event has happened. Since the modeling and computational complexities entailed by maintaining this distinction are often prohibitive, one asks for conditions under which it can be safely ignored. Such conditions are given by the missing at random (mar) and coarsened at random (car) assumptions. In this paper we provide an in-depth analysis of several questions relating to mar/car assumptions. The main purpose of our study is to provide criteria by which one may evaluate whether a car assumption is reasonable for a particular data-collecting or observational process. This question is complicated by the fact that several distinct versions of mar/car assumptions exist. We therefore first provide an overview of these different versions, in which we highlight the distinction between distributional and coarsening-variable induced versions. We show that distributional versions are less restrictive and sufficient for most applications. We then address, from two different perspectives, the question of when the mar/car assumption is warranted. First we give a static analysis that characterizes the admissibility of the car assumption in terms of the support structure of the joint probability distribution of complete data and incomplete observations. Here we obtain an equivalence characterization that improves and extends a recent result of Grünwald and Halpern. We then turn to a procedural analysis that characterizes the admissibility of the car assumption in terms of procedural models for the actual data (or observation) generating process.
The main result of this analysis is that the stronger coarsened completely at random (ccar) condition is arguably the most reasonable assumption, as it alone corresponds to data coarsening procedures that satisfy a natural robustness property.
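The car condition itself is not spelled out in the abstract. As a rough illustration under the standard definition (the toy coarsening mechanism below is hypothetical, not taken from the paper), a coarsening mechanism with probabilities lam(y | x) satisfies car when, for every observable set y, lam(y | x) takes the same value for every complete-data value x contained in y:

```python
# Hypothetical toy coarsening mechanism (illustrative, not from the paper):
# the complete data X takes values in {1, 2, 3}; an incomplete observation
# is a set y that contains the true value x.
# lam[(x, y)] = probability of reporting observation y when the truth is x.
lam = {
    (1, frozenset({1})): 0.5, (1, frozenset({1, 2})): 0.5,
    (2, frozenset({2})): 0.5, (2, frozenset({1, 2})): 0.5,
    (3, frozenset({3})): 1.0,
}

def is_car(lam, tol=1e-12):
    """car check: for every observable set y, the coarsening probability
    lam(y | x) must be the same for all complete-data values x in y."""
    observations = {y for (_, y) in lam}
    for y in observations:
        probs = [lam.get((x, y), 0.0) for x in y]
        if max(probs) - min(probs) > tol:
            return False
    return True

print(is_car(lam))  # this mechanism satisfies car
```

Here the observation {1, 2} is reported with the same probability 0.5 whether the truth is 1 or 2, so conditioning on the observed set as if it were the event itself is harmless; raising that probability for x = 1 alone would break the car property, which is the failure mode behind puzzles such as Monty Hall mentioned in the related literature.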



