Practical and Consistent Estimation of f-Divergences

by Paul K. Rubenstein et al.

The estimation of an f-divergence between two probability distributions based on samples is a fundamental problem in statistics and machine learning. Most works study this problem under very weak assumptions, in which case it is provably hard. We consider the case of stronger structural assumptions that are commonly satisfied in modern machine learning, including representation learning and generative modelling with autoencoder architectures. Under these assumptions we propose and study an estimator that can be easily implemented, works well in high dimensions, and enjoys faster rates of convergence. We verify the behavior of our estimator empirically in both synthetic and real-data experiments, and discuss its direct implications for total correlation, entropy, and mutual information estimation.
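To make the quantity being estimated concrete (this is an illustrative sketch, not the estimator proposed in the paper), the following Python snippet computes a simple Monte Carlo plug-in estimate of the KL divergence, the f-divergence with f(t) = t log t, between two univariate Gaussians whose densities are assumed known, and compares it with the closed form:

```python
import math
import random

def gaussian_pdf(x, mu, sigma):
    """Density of a univariate Gaussian N(mu, sigma^2)."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def mc_kl_estimate(mu_p, sigma_p, mu_q, sigma_q, n=100_000, seed=0):
    # An f-divergence is D_f(P||Q) = E_{x~Q}[f(p(x)/q(x))].
    # With f(t) = t log t this reduces to KL(P||Q) = E_{x~P}[log(p(x)/q(x))],
    # which we estimate by averaging the log density ratio over samples from P.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(mu_p, sigma_p)
        total += math.log(gaussian_pdf(x, mu_p, sigma_p) / gaussian_pdf(x, mu_q, sigma_q))
    return total / n

def kl_gaussians(mu_p, sigma_p, mu_q, sigma_q):
    # Closed form: log(s_q/s_p) + (s_p^2 + (mu_p - mu_q)^2) / (2 s_q^2) - 1/2
    return (math.log(sigma_q / sigma_p)
            + (sigma_p ** 2 + (mu_p - mu_q) ** 2) / (2 * sigma_q ** 2) - 0.5)

est = mc_kl_estimate(0.0, 1.0, 1.0, 1.5)
exact = kl_gaussians(0.0, 1.0, 1.0, 1.5)
```

In this toy setting both densities are known in closed form; the paper's point is that comparable structural knowledge (e.g. a tractable density for one distribution, as in autoencoder models) makes estimation far easier than in the fully nonparametric case.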


Related papers:

- Multivariate f-Divergence Estimation With Confidence
- Beyond Normal: On the Evaluation of Mutual Information Estimators
- Neural Joint Entropy Estimation
- Neural Entropic Estimation: A faster path to mutual information estimation
- On the Estimation of Information Measures of Continuous Distributions
- Practical Estimation of Renyi Entropy
- Inductive Mutual Information Estimation: A Convex Maximum-Entropy Copula Approach
