Formal Limitations on the Measurement of Mutual Information

11/10/2018
by David McAllester, et al.

Motivated by applications to unsupervised learning, we consider the problem of measuring mutual information. Recent analysis has shown that naive kNN estimators of mutual information have serious statistical limitations, motivating more refined methods. In this paper we prove that serious statistical limitations are inherent to any measurement method. More specifically, we show that any distribution-free high-confidence lower bound on mutual information cannot be larger than O(ln N), where N is the size of the data sample. We also analyze the Donsker-Varadhan lower bound on KL divergence in particular and show that, when simple statistical considerations are taken into account, this bound can never produce a high-confidence value larger than ln N. While large high-confidence lower bounds are impossible, in practice one can use estimators without formal guarantees. We suggest expressing mutual information as a difference of entropies and using cross-entropy as an entropy estimator. We observe that, although cross-entropy is only an upper bound on entropy, cross-entropy estimates converge to the true cross-entropy at the rate of 1/√(N).
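
For reference, the Donsker-Varadhan representation underlying the analyzed lower bound is the standard identity

\[
\mathrm{KL}(P \,\|\, Q) \;=\; \sup_{f}\; \mathbb{E}_{x \sim P}[f(x)] \;-\; \ln \mathbb{E}_{x \sim Q}\!\left[e^{f(x)}\right],
\qquad
I(X;Y) \;=\; \mathrm{KL}\!\left(P_{X,Y} \,\|\, P_X \otimes P_Y\right).
\]

The statistical difficulty isolated in the paper sits in the second term: estimating \(\ln \mathbb{E}_{x \sim Q}[e^{f(x)}]\) from N samples is what limits high-confidence values of this bound to ln N.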
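
Below is a minimal sketch of the suggested difference-of-entropies approach, I(X;Y) = H(Y) − H(Y|X), with each entropy replaced by a held-out cross-entropy. The toy data-generating process, alphabet size, and Laplace-smoothed count models here are illustrative assumptions, not details from the paper.

import numpy as np

# Sketch: estimate I(X;Y) = H(Y) - H(Y|X) by replacing each entropy
# with the held-out cross-entropy of a model fit on a training split.

rng = np.random.default_rng(0)

# Toy correlated pair: Y is a noisy copy of X over a small alphabet.
K = 8
N = 100_000
x = rng.integers(0, K, size=N)
noise = rng.random(N) < 0.2
y = np.where(noise, rng.integers(0, K, size=N), x)

half = N // 2
x_tr, y_tr, x_te, y_te = x[:half], y[:half], x[half:], y[half:]

# Laplace-smoothed models q(y) and q(y | x) fit on the training half.
alpha = 1.0
q_y = (np.bincount(y_tr, minlength=K) + alpha) / (half + alpha * K)
joint = np.full((K, K), alpha)
np.add.at(joint, (x_tr, y_tr), 1.0)
q_y_given_x = joint / joint.sum(axis=1, keepdims=True)

# Held-out cross-entropies in nats; per the abstract, each such
# estimate converges to the true cross-entropy at rate 1/sqrt(N).
H_y = -np.mean(np.log(q_y[y_te]))
H_y_given_x = -np.mean(np.log(q_y_given_x[x_te, y_te]))

print(f"estimated I(X;Y) ≈ {H_y - H_y_given_x:.3f} nats")

Since each cross-entropy only upper-bounds the corresponding entropy, the difference is an estimate of mutual information without a formal guarantee in either direction, which is exactly the trade-off the abstract advocates.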
